Comments (3)
Based on a quick look, the inlining code does not touch fields like ir_version, so it seems unlikely that this is happening during inlining. Is there an ONNX model that reproduces the problem?
from onnx.
Every model used in the unit tests usually specifies the ir_version so that onnxruntime does not fail because it does not yet support the newest one. Any model already used in the existing unit tests is fine. This is how I found it: https://github.com/sdpython/onnx-extended/blob/main/_unittests/ut_tools/test_onnx_inline.py#L58.
@xadupre: I looked at the example in the above link. It is an invalid ONNX model: subgraphs cannot use outer-scope names as their output names (and expect those values to be returned). Instead, the subgraph is expected to contain at least one node such as subgraph_output = Identity(outer_scope_name).
I realize you do run the checker; I am guessing it doesn't catch this case. If so, we should fix the checker so it produces an error message.
I don't think this is connected to IR version.
Related Issues (20)
- Shape inference segfaults when adding tensors of shapes `(0,)` and `(1,)`
- Build Error: Protobuf pinned version missing `<cstdint>` definition in its headers
- Refactor the subbyte module
- No Adapter From Version 14 for Mul
- Deprecation / Update Policy for onnx dependencies?
- How to parse a function with variadics?
- Split onnx model in architecture and weights
- a) Feature Request: Function sample_dirichlet, b) State of probabilistic model support?
- Using onnx shape inference some operator doesn't support shape inference
- Convert a model with custom pytorch CUDA kernel
- Want to substitute Expand operator with some other operator in ONNX due to compatibility issues with hardware
- Squeeze-11's output shape is mistakenly inferred when input has dynamic axes and squeezing axes is not specified
- NMS Operator Output Different From Torchvision Implementation
- n-bit data type
- Model split failure using onnx.utils.extract_model
- Change the example in the documentation of Transpose
- Python 3.13 support
- Can't reduce the version of Softmax from 13 to 12
- Unexpected floatBuffer output with onnxModel that subtracts 2 input floatBuffers
- [Feature request] Better support for large models (>2GB) in extract_model