
libonnx's Introduction

xboot-logo


XBOOT Introduction

       _                   _                     
 _  _ | |___ _____ _____ _| |_                   
\ \/ /|  _  |  _  |  _  |_   _|  (C) 2007-2023   
 )  ( | |_| | |_| | |_| | | |____JIANJUN.JIANG__ 
/_/\_\|_____|_____|_____| |_____________________|

Driving a single GPIO means poring over the chip manual, which is tedious, and every time you switch to a new SoC all of that work starts over. Building a modern UI with rich animation effects feels nearly impossible; the various protocol stacks read like arcane scripture, hard enough to understand, let alone implement; virtual machine technology is popular and powerful, but porting one yourself is riddled with obstacles. Let XBOOT take these problems off your hands. XBOOT is not only a powerful, highly portable embedded-system bootloader with a high degree of code reuse, it is also an application execution engine for SoCs: no complex operating system is required, and the APP runs directly at power-on. "Write once, run anywhere" is not just a slogan; it is the sole reason XBOOT exists. Some basic features, briefly listed:

  • File system support
  • Lua virtual machine
  • Various protocol stacks
  • Vector graphics library and vector fonts
  • Modern GUI widgets with animation effects
  • Multi-platform support
  • Bus drivers: UART, I2C, SPI, etc.
  • Device drivers: GPIO, PWM, IRQ, CLK, LED, BUZZER, VIBRATOR, WATCHDOG, RNG, FRAMEBUFFER, RTC, etc.
  • Applications written in Lua, with a high-level API for driving the hardware abstraction interfaces directly
  • Platform-independent applications: write once, run anywhere

linux-sandbox-game-2048

Documents and Development Tools

Compiling the Source Code

The Makefile takes two variables at build time: one selects the cross toolchain, the other the target hardware platform.

Variable        Description
CROSS_COMPILE   Specifies the cross toolchain
PLATFORM        Specifies the hardware platform, composed of two parts: arch and mach
  • Realview platform, qemu-system-arm emulator

make clean
make CROSS_COMPILE=/path/to/arm-none-linux-gnueabihf- PLATFORM=arm32-realview
  • Sandbox on 64-bit x86 Linux

The sandbox depends on the SDL2 library, so libsdl2-dev must be installed before building. On Ubuntu, for example:

sudo apt-get install libsdl2-dev
make clean
make CROSS_COMPILE="" PLATFORM=x64-sandbox

Discussion Group, Please Join Us

Official XBOOT QQ group: 658250248 (capacity 2000)



libonnx's People

Contributors

beru, jianjunjiang, reinforce-ii


libonnx's Issues

Which header files are necessary?

Hello!
I am working on a project where I have to deploy an inference engine in a very minimal environment where even the math library is not present. While compiling (onnxconf.h) for the platform I found out that the math.h header file was missing in the platform libraries. Here are some of the headers being used in that header file - onnxconf.h:

#include <stdio.h>
#include <stdlib.h>  
#include <stdint.h>  
#include <stddef.h>  
#include <string.h>  
#include <malloc.h>  
#include <float.h>  
#include <math.h>  
#include <list.h>  
#include <hmap.h>  

Also, I have found that no math function is actually used in this header file, I hope I am right on this! Correct me if I am wrong.
So I was wondering if I could just remove the inclusion of math.h and still have a functional inference engine?

Just to keep things simple, compiling a math library would not be desirable, but it is possible.

Running test/model/test_mnist_8 issue

Hi,

When I run test/model/test_mnist_8 once, it works and I get an OKAY result.
I then re-run it and it FAILS.

Any suggestion why this might be and what to look for?

How to convert an ONNX model file to an `unsigned char` array as in the hello world example?

In the main.c file in the examples/hello folder, how did you convert the MNIST model to an unsigned char array and use it?

#include <onnx.h>

static const unsigned char mnist_onnx[] = {
	0x08, 0x03, 0x12, 0x04, 0x43, 0x4e, 0x54, 0x4b, 0x1a, 0x05, 0x32, 0x2e,
	0x35, 0x2e, 0x31, 0x22, 0x07, 0x61, 0x69, 0x2e, 0x63, 0x6e, 0x74, 0x6b,
	0x28, 0x01, 0x3a, 0xb2, 0xce, 0x01, 0x0a, 0x62, 0x0a, 0x0c, 0x50, 0x61,
	0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x31, 0x39, 0x33, 0x0a, 0x1b,
	0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x31, 0x39, 0x33,

If you have any code to do it can you share it?
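The usual way to produce such an array is not C code at all but the `xxd` tool (shipped with vim), whose `-i` flag emits the file as a C include. A sketch with an illustrative stand-in file name:

```shell
# For demonstration, create a small stand-in for the model file.
printf 'CNTK' > mnist.onnx

# xxd -i emits the file as a C array plus a length variable, named
# after the input file with '.' mapped to '_'.
xxd -i mnist.onnx > mnist_onnx.h
cat mnist_onnx.h
```

The generated header declares `unsigned char mnist_onnx[]` and `unsigned int mnist_onnx_len`, which can then be passed to the context-allocation call as in the hello example.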

Fails to compile on macOS

Hi, there.

I came across this project and tried to compile it on macOS, but it fails with the following error.

main.c:90:47: warning: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long') [-Wformat]
printf("%-32s %ld %12.3f(us)\r\n", e->key, p->count, (p->count > 0) ? ((double)p->elapsed / 1000.0f) / (double)p->count : 0);
~~~ ^~~~~~~~
%llu
1 warning generated.
[LD] Linking benchmark
ld: library not found for -lcrt0.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [benchmark] Error 1
make[1]: *** [all] Error 2
make: *** [all] Error 2

Maxpool + dilation

This is really a question, I don't think there is a bug here, just something I'm not understanding.

I'm looking at the code for maxpool and how it handles dilations. The spec has this example:

"""
input_shape: [1, 1, 4, 4]
output_shape: [1, 1, 2, 2]
"""
node = onnx.helper.make_node(
    'MaxPool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[2, 2],
    strides=[1, 1],
    dilations=[2, 2]
)
x = np.array([[[
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]]]).astype(np.float32)
y = np.array([[[
    [11, 12],
    [15, 16]]]]).astype(np.float32)

expect(node, inputs=[x], outputs=[y], name='test_maxpool_2d_dilations')

This should implicitly use AUTO_PAD_NOTSET. Now what I tried is getting the MaxPool_float32 to give the [ 11, 12, 15, 16 ] result by hardcoding the inputs, for the full code + output see this godbolt:

int strides[] = { 1, 1 };
int kernels[] = { 2, 2 };
int cpads[] = { 0, 0, 0, 0 };

int x_ndim = 4;
int x_dims[] = { 1, 1, 4, 4 };
int y_dims[] = { 1, 1, 2, 2 };

From my code reading, the dilation is only used to determine the output dimensions, which I've hardcoded here.

But with these inputs I get the incorrect output:

6.000000 7.000000 10.000000 11.000000

So, what is the way that dilations influence the end result that I am missing?

Hello model RAM size required

Hi,

I'm trying to run the hello example on a small embedded system, but I'm unsure of the memory required to allocate this model (when running onnx_context_alloc).

I have roughly 2MB, is that enough?
Is there a smaller model that I can test with the model defined as a const char array?
Like the static const unsigned char mnist_onnx[] = { ... }

Can this software run on macOS?

I came across an error when compiling it on my Mac.

[CC] helper.c
In file included from helper.c:28:
./helper.h:13:10: fatal error: 'malloc.h' file not found
#include <malloc.h>
^~~~~~~~~~
1 error generated.
make[1]: *** [helper.o] Error 1
make: *** [all] Error 2

isnan and isinf issue

Hi,

When I compile with clang I get the following warning, and linking against the library then complains. Not quite sure why; when I added isnan and isinf calls directly in main.c, it compiled OK. -lm is added to the linker flags.

Library compilation:
default/IsNaN.c:34:11: warning: implicit declaration of function 'isnanf' is invalid in C99 [-Wimplicit-function-declaration] py[i] = isnanf(v) ? 1 : 0;

Linker:
libonnx.a(.text+0x598): undefined reference to `isnanf'

Why does every test in the model, node and simple folders fail?

I have compiled libonnx on a fresh installation of Ubuntu, installed all the prerequisites, ran just make, and tried running the tests one by one. But I find that every test I run fails, and I am not able to figure out why.

Here is what I did:

  • Installed the latest LTS release of Ubuntu
  • Installed make, build-essential, git and libsdl2-gfx (Did do other stuff but those would not mess with this)
  • Ran make all to compile
  • Ran the tests on many examples: ./tests ./model/mnist_8/
  • But every test that I have tried has just failed!
$ ./TESTING/libonnx/tests/tests ./TESTING/libonnx/tests/model/mnist_8/
[test_data_set_0]                                                                       [FAIL]
[test_data_set_1]                                                                       [FAIL]
[test_data_set_2]                                                                       [FAIL]
  • All those in the simple and model folders fail but those in pytorch-* succeed partially
  • Is this because of a missing operator? (no Unsupported opset message has been displayed as in the pytorch tests)
  • Nonetheless the example for handwriting recognition has identified the number correctly most of the time

Support GRU?

I find that GRU.c is nearly empty. libonnx doesn't support GRU yet, does it?

Unsupported opset => Gather-11 (ai.onnx)

IR Version: v6
Producer: pytorch 1.11.0
Domain:
Imports:
ai.onnx v11
Conv_0: Conv-11 (ai.onnx)
Inputs:
input.1: float32[1 x 3 x 352 x 352] = [...]
onnx::Conv_760: float32[24 x 3 x 3 x 3] = [...]
onnx::Conv_761: float32[24] = [...]
Outputs:
input.4: float32[1 x 24 x 176 x 176] = [...]

...

Concat_264: Concat-11 (ai.onnx)
Inputs:
onnx::Concat_744: float32[1 x 1 x 22 x 22] = [...]
onnx::Concat_960: float32[1 x 4 x 22 x 22] = [...]
onnx::Concat_757: float32[1 x 80 x 22 x 22] = [...]
Outputs:
758: float32[1 x 85 x 22 x 22] = [...]
110592
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Gather-11 (ai.onnx)
Unsupported opset => Resize-11 (ai.onnx)
Unsupported opset => Pad-11 (ai.onnx)

Issue:
Using https://github.com/dog-qiuqiu/FastestDet Fastest yolo, unsupported opset. Onnx model in /FastestDet/example/onnx-runtime.

The operator is unsupported. There was no problem in the yolo-tiny test, but this issue appears with another lightweight model. Could you tell me whether this is a problem with the operator implementation or something else?

Failed to load 'yolov5n.onnx'.

hi,
I tried to load 'yolov5n.onnx' like this:

#include "onnx.h"

int main(void)
{
    struct onnx_context_t *sess = onnx_context_alloc_from_file("yolov5n.onnx", NULL, 0);
    onnx_context_dump(sess, 1);
    return 0;
}

but nothing was output, not even warnings or errors.
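One thing worth checking: alloc-style functions like onnx_context_alloc_from_file typically return NULL on failure (file missing, or the model failing to parse), and dumping a NULL context prints nothing, which would match the silent behavior described. A sketch that makes the failure visible (the onnx_context_free call is assumed here for cleanup):

```c
#include <stdio.h>
#include <onnx.h>

int main(void)
{
	struct onnx_context_t *sess =
		onnx_context_alloc_from_file("yolov5n.onnx", NULL, 0);

	/* Check the return value before dumping: a NULL session would
	 * otherwise produce no output at all. */
	if (!sess) {
		fprintf(stderr, "failed to load yolov5n.onnx\n");
		return 1;
	}
	onnx_context_dump(sess, 1);
	onnx_context_free(sess);
	return 0;
}
```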

Implementation of Upsample function

Hi,

In the src/default folder we can find the C implementations of most operators (e.g. Conv, MatMul, etc.); however, we could not find the C implementation of Upsample.

Where is the C implementation of Upsample?

Python bindings

It'd be nice to have python bindings, since onnxruntime by Micro$oft has telemetry and so it is a bit unethical to depend on it. Fortunately there can be a thin abstraction layer.

To maximize compatibility to various python impls and to spare users from compiling the lib themselves it may make sense to implement it via ctypes. There are packages generating ctypes bindings from headers automatically, but usually they need extensive postprocessing.

Valgrind output for Yolo v2 model

I downloaded the tiny yolo v2 model from https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/tiny-yolov2
and when running inference on it, I got the following output from Valgrind:

==178736== Invalid read of size 1
==178736== at 0x162DF9: shash (onnxconf.h:146)
==178736== by 0x162F11: MaxPool_init (MaxPool.c:38)
==178736== by 0x113FF0: onnx_graph_alloc (onnx.c:1238)
==178736== by 0x10FCFA: onnx_context_alloc (onnx.c:102)
==178736== by 0x10FF35: onnx_context_alloc_from_file (onnx.c:145)

==178736== Invalid write of size 1
==178736== at 0x1154F1: onnx_attribute_read_string (onnx.c:1747)
==178736== by 0x162F09: MaxPool_init (MaxPool.c:38)
==178736== by 0x113FF0: onnx_graph_alloc (onnx.c:1238)
==178736== by 0x10FCFA: onnx_context_alloc (onnx.c:102)
==178736== by 0x10FF35: onnx_context_alloc_from_file (onnx.c:145)

==178736== Invalid read of size 1
==178736== at 0x13BEB8: shash (onnxconf.h:146)
==178736== by 0x13BFD1: Conv_init (Conv.c:43)
==178736== by 0x113FF0: onnx_graph_alloc (onnx.c:1238)
==178736== by 0x10FCFA: onnx_context_alloc (onnx.c:102)
==178736== by 0x10FF35: onnx_context_alloc_from_file (onnx.c:145)

==178736== Invalid write of size 1
==178736== at 0x1154F1: onnx_attribute_read_string (onnx.c:1747)
==178736== by 0x13BFC9: Conv_init (Conv.c:43)
==178736== by 0x113FF0: onnx_graph_alloc (onnx.c:1238)
==178736== by 0x10FCFA: onnx_context_alloc (onnx.c:102)
==178736== by 0x10FF35: onnx_context_alloc_from_file (onnx.c:145)

==178736== ERROR SUMMARY: 30 errors from 4 contexts (suppressed: 0 from 0)

This project needs SDL2.

At first, I got an error when compiling this project:
main.c:1:10: fatal error: SDL2/SDL.h: No such file or directory
1 | #include <SDL2/SDL.h>
| ^~~~~~~~~~~~
Then I used sudo apt-get install libsdl2-gfx-dev, which fixed this error.

I think this project should document this dependency.

THANKS!

Tensorflow model with opset 12 seems to crash when loaded

I have a model converted from Tensorflow that uses opset 12. (using tf2onnx.convert)
The model opens fine in Netron and elsewhere but crashes somewhere in Concat_reshape when I try to load it with onnx_context_alloc_from_file. I tried compiling for both x86 and x64 with the same result.

Here are the model properties as viewed through Netron:
image

Opening the models supplied in the libonnx test directory seemed to work fine. Do you have any suggestions for how to get this working? Thanks.
