
onnx-modifier's People

Contributors

fengwang · ice-tong · lewis-lu · loseall · zhangge6


onnx-modifier's Issues

Script failed to load - Windows

Hello @ZhangGe6,

I appreciate your amazing and helpful work.

I am having an issue when opening a model in the Windows app. I get the following error when trying to open a basic mobilenetv2-7.onnx:

The script 'http://... /static/onnx.js' failed to load.

Thank you in advance for your help!

The generated model is invalid

I can add nodes, but the generated model is invalid; the values filled in for the added node do not seem to take effect. When the exported model is viewed in netron, the newly added node has no values (normal nodes do show values), and running inference on the exported model with onnxruntime raises an error.

Are there version requirements for onnx and Flask?

Running

python3.8 app.py

produces the following output:

 * Serving Flask app 'app' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000 (Press CTRL+C to quit)
127.0.0.1 - - [09/May/2022 16:22:44] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/view-grapher.css HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/view-sidebar.css HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/sweetalert.css HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/sweetalert.min.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/dagre.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/base.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/text.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/json.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/xml.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/python.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/protobuf.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/flatbuffers.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/zip.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/gzip.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/tar.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/view-grapher.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/view-sidebar.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/view.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/index.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:44] "GET /static/logo.svg HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:45] "GET /favicon.ico HTTP/1.1" 404 -
127.0.0.1 - - [09/May/2022 16:22:50] "GET /static/onnx.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:50] "GET /static/onnx-proto.js HTTP/1.1" 304 -
127.0.0.1 - - [09/May/2022 16:22:51] "GET /static/onnx-metadata.json HTTP/1.1" 304 -
[2022-05-09 16:22:51,596] ERROR in app: Exception on /open_model [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "app.py", line 15, in open_model
    onnx_modifier = onnxModifier.from_name_stream(onnx_file.filename, onnx_file.stream)
  File "/Users/dengxuezheng/BeKe/tf/onnx/onnx-modifier/onnx_modifier.py", line 26, in from_name_stream
    model_proto = onnx.load_model(stream, onnx.ModelProto)
  File "/Users/dengxuezheng/Library/Python/3.8/lib/python/site-packages/onnx/__init__.py", line 124, in load_model
    model_filepath = _get_file_path(f)
  File "/Users/dengxuezheng/Library/Python/3.8/lib/python/site-packages/onnx/__init__.py", line 54, in _get_file_path
    return os.path.abspath(f.name)
  File "/usr/local/Cellar/[email protected]/3.8.13_1/Frameworks/Python.framework/Versions/3.8/lib/python3.8/posixpath.py", line 374, in abspath
    path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not int
127.0.0.1 - - [09/May/2022 16:22:51] "POST /open_model HTTP/1.1" 500 -
[2022-05-09 16:24:02,033] ERROR in app: Exception on /download [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "app.py", line 26, in modify_and_download_model
    onnx_modifier.reload()   # allow for downloading for multiple times
NameError: name 'onnx_modifier' is not defined
127.0.0.1 - - [09/May/2022 16:24:02] "POST /download HTTP/1.1" 500 -

Running python3.8 app.py produces the errors above. Could someone please take a look?
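
A possible workaround, sketched below with plain onnx rather than the project's actual app.py: some onnx releases try to resolve a file path from the uploaded stream, so parsing the raw protobuf bytes directly avoids the TypeError above. The helper name and the use of onnx_file.stream are assumptions for illustration.

import onnx

def load_model_from_stream(stream):
    # Parse the protobuf bytes directly instead of letting onnx.load_model
    # look up a file path on the stream object (which raised the TypeError).
    return onnx.load_model_from_string(stream.read())

# hypothetical usage inside the Flask handler:
# model_proto = load_model_from_stream(onnx_file.stream)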

Cannot add a Reshape node

When trying to add a Reshape node, I don't know how to provide a static shape for the shape input, and the result does not look correct.
reshaper
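
For reference, a minimal sketch with the plain onnx helper API (outside onnx-modifier) of how a Reshape node and its static shape input are usually built; the tensor names and the target shape are placeholders.

import onnx
from onnx import helper, TensorProto

# Reshape takes its target shape as a second *input* tensor (int64),
# typically provided as an initializer rather than an attribute.
shape_init = helper.make_tensor(
    name="reshape_shape", data_type=TensorProto.INT64, dims=[4], vals=[1, 3, 192, 192]
)
reshape_node = helper.make_node(
    "Reshape", inputs=["data", "reshape_shape"], outputs=["reshaped"]
)
# model.graph.initializer.append(shape_init)
# model.graph.node.append(reshape_node)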

Added a Cast node and modified its 'to' attribute, but it keeps rolling back to NaN

I created a Cast node; the 'to' attribute is undefined initially.
I changed it to float32, float, Float, Float32, FLOAT, etc.
No matter what I put, it shows NaN afterwards.

I changed it to 1 and it then shows float32.
I thought I had it, but when I load the new model with netron, it says unknown type "1".

I really don't know how to add a Cast node properly.
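
For reference, a minimal sketch of a valid Cast node built with the plain onnx helper API: the 'to' attribute is an integer from the TensorProto.DataType enum (TensorProto.FLOAT equals 1, i.e. float32), not the string "float32". Tensor names are placeholders.

import onnx
from onnx import helper, TensorProto

cast_node = helper.make_node(
    "Cast",
    inputs=["x"],
    outputs=["x_fp32"],
    to=TensorProto.FLOAT,  # enum value 1 -> float32
)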

Modifying the batch size has no actual effect

Scenario: I changed the original onnx model's batch size to 1 (i.e. the input shape from [batch_size, 3, 224, 224] to [1, 3, 224, 224]). The netron visualization matches expectations, but when parsing the modified onnx model and reading the input shape through onnx::ValueInfoProto, e.g. tensorInfo.shape().dim(i).dim_value(), the result is the same as with the symbolic batch_size, namely [0, 3, 224, 224]. The model is the open-source mobilenet onnx.

Visualization
Original model
image

Model after setting batch_size = 1
image
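
A small sketch (plain onnx, path is a placeholder) for checking what the first input dimension actually contains: a dimension stores either a symbolic name (dim_param, e.g. "batch_size") or a fixed integer (dim_value), and dim_value reads as 0 whenever only dim_param is set, which matches the behaviour described above.

import onnx

model = onnx.load("modified_mobilenet.onnx")  # placeholder path
dim0 = model.graph.input[0].type.tensor_type.shape.dim[0]
# If the edit only changed the displayed name, dim_param is set and dim_value stays 0;
# a truly fixed batch size should show dim_param == "" and dim_value == 1.
print(dim0.dim_param, dim0.dim_value)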

Cannot insert a Slice node successfully; please add a feature to modify the number of input channels

Hi, here is my situation: my model was trained on grayscale images, so the input has only 1 channel. I now need the input to have 3 channels so I can benefit from TNN acceleration, but I don't want to rebuild the dataset and retrain. My plan is: step 1, change the number of input channels from 1 to 3, i.e. [1,1,192,192] to [1,3,192,192]; step 2, insert a Slice operator right after the input that selects only the first of the three channels. This way I don't have to retrain the model.
For now I skipped step 1 and tried step 2, inserting the Slice operator, but ran into the following problem:
I have already modified the inputs and outputs of the Slice node
image
and also modified the input of the first convolution
image
but the Slice operator still cannot be connected to the first convolution.
The onnx model I am using: model.zip

How do I specify the shape dimensions?

When using Reshape, I find that I cannot define the size of the shape input. How is this supposed to be handled? Thanks.

Deleting redundant output nodes from a multi-output model raises: AttributeError: module 'onnx' has no attribute 'shape_inference'

Problem

The original onnx model has redundant output nodes; its outputs are shown in figure 1 below. I want to delete every output node except those named Yaw, 220 and 221, so I deleted the other outputs at the Gemm layers. The deletion succeeds and the remaining output nodes are shown in figure 2, but clicking download raises the error shown in figure 3.

Original output nodes (figure 1)
image
Output nodes after deletion (figure 2)
image
Error (figure 3)
image

Full error log

127.0.0.1 - - [29/Jan/2023 16:31:52] "POST /open_model HTTP/1.1" 200 -
[2023-01-29 16:38:03,331] ERROR in app: Exception on /download [POST]
Traceback (most recent call last):
  File "/Users/cxt/miniconda3/envs/pytorch1.9/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/cxt/miniconda3/envs/pytorch1.9/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/cxt/miniconda3/envs/pytorch1.9/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/cxt/miniconda3/envs/pytorch1.9/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
    raise value
  File "/Users/cxt/miniconda3/envs/pytorch1.9/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/cxt/miniconda3/envs/pytorch1.9/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "app.py", line 27, in modify_and_download_model
    onnx_modifier.modify(modify_info)
  File "/Users/cxt/Reposities/onnx-modifier/onnx_modifier.py", line 302, in modify
    self.post_process()
  File "/Users/cxt/Reposities/onnx-modifier/onnx_modifier.py", line 278, in post_process
    shape_inference()
  File "/Users/cxt/Reposities/onnx-modifier/onnx_modifier.py", line 266, in shape_inference
    inferred_shape_info = onnx.shape_inference.infer_shapes(self.model_proto)
AttributeError: module 'onnx' has no attribute 'shape_inference'
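
One possible workaround (an assumption, not a confirmed fix): with some onnx installations the shape_inference submodule is not reachable as an attribute of the top-level package until it is imported explicitly, so importing it directly, or upgrading the onnx package, may resolve the AttributeError.

import onnx
import onnx.shape_inference  # explicit submodule import

model = onnx.load("model.onnx")  # placeholder path
inferred = onnx.shape_inference.infer_shapes(model)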

New output after a Conv layer

Is it possible to add a new output, for instance an output right after a convolution layer?
Sometimes we may want to inspect the feature maps from a convolution layer, e.g. for image recognition.

I'd like to get such an output. I found ways to do this for PyTorch and Keras/TensorFlow.
I'd like to do the same with an ONNX model in ONNX Runtime by cutting and modifying the onnx model.

I cannot find a tutorial for this. If such examples exist, please point me to them.
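
For reference, a hedged sketch with plain onnx (not onnx-modifier) of exposing an intermediate tensor, e.g. a Conv output, as an extra graph output so its feature map can be fetched from ONNX Runtime; the tensor name "conv1_out" and the file paths are placeholders.

import onnx
from onnx import helper, TensorProto

model = onnx.load("model.onnx")
# Declare the intermediate tensor as an additional graph output;
# passing None for the shape leaves it unspecified.
extra_out = helper.make_tensor_value_info("conv1_out", TensorProto.FLOAT, None)
model.graph.output.append(extra_out)
onnx.save(model, "model_with_feature_out.onnx")

After saving, an onnxruntime session can request "conv1_out" alongside the original outputs in session.run.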

How do I add an INPUT?

For example, I have an Add node that currently has two inputs, and I want it to have three. How can I add another input?
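
For reference: the ONNX Add operator is defined with exactly two inputs, so a three-way addition is usually expressed with the variadic Sum operator instead. A minimal sketch with the plain onnx helper API, tensor names being placeholders:

import onnx
from onnx import helper

sum_node = helper.make_node(
    "Sum",
    inputs=["a", "b", "c"],  # Sum accepts any number of inputs
    outputs=["abc_sum"],
)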

Precision issue

The value entered on an added node does not match the exported value. Is the precision being truncated by a type conversion?

Data type problem still exists

image

After deleting a Reshape node, I modified the Transpose node's attributes, but there is a problem with the type of the final output.

After deleting a sub-branch, the nodes on that branch become inputs

1. I deleted a sub-branch from the original model, as shown below:
screenshot_24

2. After saving and reopening the pruned model, the number of inputs has increased, as shown below:
screenshot_25

3. I removed the extra model inputs with the following code:

import onnx

inputpath = "./descriptor.onnx"

# 1. Modify the model
model = onnx.load(inputpath)
inputs = model.graph.input
inputs_num = 1  # the original network's inputs come first in the list, so pop() removes the redundant inputs appended after them
for i in range(len(inputs) - inputs_num):
    inputs.pop()
print(inputs)

# 2. Check the graph and save the model (overwriting the original file)
onnx.checker.check_model(model)
onnx.save(model, inputpath)

Cannot modify an initializer

image
image

I modified the initializer following your method and then clicked download, but the new onnx file still shows it unmodified.

reconnect feature is unsupported

for example:
a -> b
b -> c

I modified the output of node a to be the input of node c, but a and c do not get reconnected.

Graph modified but output information is not shown

Hi @ZhangGe6
First of all thanks for this wonderful tool.
I modified my graph using this tool, and when I view it in netron it looks correct, but the node I added shows no output information: I can't see the output shape or type, only the output name as text. Can you please help me fix this?

Thanks in Advance:)

Adding a Sub node causes an export error

The model's mean is [0.5, 0.5, 0.5], and I tried to add a Sub node after the onnx model's input0 to perform the mean subtraction, but it never worked.
The picture below shows the correct onnx parameters:
image

I copied the correct parameters from the picture above (both the values and the type), but I keep getting an error.

image

image

Could you advise how to handle this? (The version is the latest one, downloaded today.)
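
For reference, a hedged sketch with plain onnx (outside onnx-modifier) of the same mean subtraction: a Sub node whose second input is a constant initializer shaped for broadcasting. The tensor names and the [1, 3, 1, 1] shape are assumptions for illustration.

import numpy as np
import onnx
from onnx import helper, numpy_helper

# Per-channel mean as a broadcastable initializer.
mean_init = numpy_helper.from_array(
    np.array([0.5, 0.5, 0.5], dtype=np.float32).reshape(1, 3, 1, 1), name="mean_value"
)
sub_node = helper.make_node(
    "Sub", inputs=["input0", "mean_value"], outputs=["input0_minus_mean"]
)
# model.graph.initializer.append(mean_init)
# model.graph.node.insert(0, sub_node)  # then rewire the first Conv to read "input0_minus_mean"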

Port conflict on macOS 12.5.1

Device: MacBook Air M1
Error when running:

  • Serving Flask app 'app'
  • Debug mode: on
    Address already in use
    Port 5000 is in use by another program. Either identify and stop that program, or start the server with a different port.
    On macOS, try disabling the 'AirPlay Receiver' service from System Preferences -> Sharing.
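
A minimal sketch (not the project's actual app.py): running Flask on a port other than 5000 sidesteps the macOS AirPlay Receiver conflict without disabling that service.

from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # Any free port avoids the AirPlay Receiver conflict on 5000.
    app.run(host="127.0.0.1", port=5001)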

Error when modifying the model input name

image
Hi, with the latest version, after modifying the model input name and clicking the download button, I get the following error:
image
Looking forward to your reply!

Unable to download model after changes

@ZhangGe6 This tool is exactly what I was looking for, but unfortunately, after I make my changes on the development server, I get an error on line 27 of app.py that onnx_modifier is not defined.

Traceback (most recent call last):
  File "/home/travis/p38_venv/lib/python3.8/site-packages/flask/app.py", line 2091, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/travis/p38_venv/lib/python3.8/site-packages/flask/app.py", line 2076, in wsgi_app
    response = self.handle_exception(e)
  File "/home/travis/p38_venv/lib/python3.8/site-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/travis/p38_venv/lib/python3.8/site-packages/flask/app.py", line 1519, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/travis/p38_venv/lib/python3.8/site-packages/flask/app.py", line 1517, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/travis/p38_venv/lib/python3.8/site-packages/flask/app.py", line 1503, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/home/travis/memryx_dev/onnx-modifier/app.py", line 27, in modify_and_download_model
    onnx_modifier.reload()   # allow downloading for multiple times
NameError: name 'onnx_modifier' is not defined

[Bug Report] Automatically adds a tensor type to the model

Hi, I met a bug-like problem here.

Suppose I open a quantized onnx model in onnx-modifier and download it directly. In that case, it will add an explicit tensor type to some ops, such as QuantizeLinear, even with nothing modified.

The figure below shows the explicit type added by onnx-modifier automatically; the original model has no type information at the same tensor.
image

  • ref onnx: link
    onnx-modifier will add type 'uint8' to the outputs of all QuantizeLinear ops in this onnx model.

We hope that onnx-modifier will keep everything that isn't modified the same as before. Thanks!

Cannot add a new node before the output node

Hi, I want to add an ArgMax node before the output node, placed between Add and heatmap, and finally rename the model output from heatmap to indexmap, as shown below:
image
However, I ran into a problem: I cannot rename the output tensor of the Add node from indexmap to heatmap (other names do not work either). It seems the Add node's output tensor name is bound to the model's output node name: whatever the output node is named, the Add node's output tensor gets the same name. See the later part of the video below.
https://user-images.githubusercontent.com/43233772/220515357-088b3043-770a-4009-86f3-ff40b977f8e7.mp4
Model:
model.zip

How to prevent auto-translation to Chinese?

Hi there,
I've encountered a problem: once I open the app on Windows, everything gets awkwardly translated into Chinese. How can I prevent this from happening?

problem

Installation via method 2, "launch from executable file", is not working.

Hoping for a solution!

How do I set the output type of an ArgMax node?

image
The ArgMax node is one I added; originally there was only one output named outputs. I want to set the ArgMax result output to

float32[1,1,512,512]

but no matter what I write in this box, it cannot generate the output correctly. How should I write it?
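
For reference: per the ONNX spec, ArgMax always produces an int64 tensor, so its output cannot be declared as float32 directly; appending a Cast node is one way to end up with float32[1,1,512,512]. A minimal sketch with the plain onnx helper API, tensor names being placeholders:

import onnx
from onnx import helper, TensorProto

argmax_node = helper.make_node(
    "ArgMax", inputs=["logits"], outputs=["indices"], axis=1, keepdims=1
)
cast_node = helper.make_node(
    "Cast", inputs=["indices"], outputs=["indices_fp32"], to=TensorProto.FLOAT
)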

How to delete Constant nodes in onnx?

As shown in the pictures below, when an onnx model modified by onnx-modifier is opened with netron, Constant nodes appear. How can I delete these Constant nodes with onnx-modifier?

constant_nodes

Sincerely hoping for a reply!
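
One option, sketched below, is to run the separate onnx-simplifier package over the downloaded model to fold constants; whether it removes these particular Constant nodes depends on the model, so treat this as a suggestion rather than an onnx-modifier feature.

import onnx
from onnxsim import simplify  # pip install onnx-simplifier

model = onnx.load("modified.onnx")  # placeholder path
model_simplified, ok = simplify(model)
assert ok, "simplified model failed the checker"
onnx.save(model_simplified, "modified_simplified.onnx")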

[Bug Report] Does onnx-modifier support modifying scalar initializers?

I have an onnx model with QuantizeLinear and DequantizeLinear nodes in it, and the scales of these nodes are scalars.

When I try to modify the scale initializer to a scalar value like below, the scale initializer becomes 0 in the downloaded modified onnx model.

  • modification process:
    image
  • modified onnx model:

image

So I wonder whether onnx-modifier supports modifying scalar initializers. Thanks!
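
For comparison, a minimal sketch of how a scalar (0-d) initializer such as a quantization scale can be built with plain onnx; the name and value are placeholders.

import numpy as np
import onnx
from onnx import numpy_helper

# A 0-d numpy array becomes a scalar initializer (empty dims list).
scale_init = numpy_helper.from_array(np.array(0.0157, dtype=np.float32), name="quant_scale")
# model.graph.initializer.append(scale_init)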

Support for conv node

When adding a node such as Conv, it is not possible to assign values to parameters like weight and bias. Could you consider adding support for Initializer nodes so that parameters such as weight and bias can be modified?
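
For reference, a hedged sketch with plain onnx (outside onnx-modifier) of attaching weight and bias initializers to a newly added Conv node; the shapes, names and zero values are placeholders for illustration.

import numpy as np
import onnx
from onnx import helper, numpy_helper

weight_init = numpy_helper.from_array(np.zeros((16, 3, 3, 3), dtype=np.float32), name="conv_w")
bias_init = numpy_helper.from_array(np.zeros(16, dtype=np.float32), name="conv_b")
conv_node = helper.make_node(
    "Conv",
    inputs=["input0", "conv_w", "conv_b"],  # weight and bias are referenced by initializer name
    outputs=["conv_out"],
    kernel_shape=[3, 3],
    pads=[1, 1, 1, 1],
)
# model.graph.initializer.extend([weight_init, bias_init])
# model.graph.node.append(conv_node)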
