Comments (9)
Thanks for reporting. I think #1484 is related. Do you know if the onnxscript version you have is the latest?
from onnxscript.
Thanks for linking the related issue. I tried with the latest onnxscript==0.1.0.dev20240515 version and I see the changes in the linked PR are in the 20240515 nightly version.
However, I'm still seeing the same error.
cc @gramalingam
Hi @asfiyab-nvidia : can you attach the (unoptimized) onnx model here? That would be helpful. I believe that even if the optimizer fails, it will still build an unoptimized onnx model. Thanks!
@justinchuby : while waiting for the model to repro, I wonder where "dtype((numpy.uint16, [('bfloat16', '<u2')]))" comes from ... it seems like ml_dtypes is a possible source for this? Even so, it doesn't add up ... I think ml_dtypes is used in the IR, right? But the constant-folding optimizer doesn't yet use the new IR ... oh, well, I guess I should try it out with the actual model.
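As an aside for readers following along (not from the thread itself): the dtype in that error string can be reconstructed with plain numpy. This is an illustrative sketch showing it is just a uint16 tagged with a single named field, the same shape of dtype that `onnx/reference/custom_element_types.py` defines:

```python
import numpy as np

# Reconstruction of the dtype from the error message (illustrative only):
# a uint16 base with one named field tagging the bytes as "bfloat16".
suspect = np.dtype((np.uint16, [("bfloat16", np.uint16)]))

# On little-endian machines numpy renders this as the string seen in the
# error: dtype((numpy.uint16, [('bfloat16', '<u2')]))
print(repr(suspect))
print(suspect.itemsize)  # 2 bytes, i.e. the same storage as a raw uint16
```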
You are right that ml_dtypes doesn't kick in at this stage yet. It looks like a product of the reference evaluator (most likely due to a Cast node). I suggest we use ml_dtypes in the reference evaluator (and across ONNX) as well.
You are right. The reference implementation does introduce this. That raises another question (which, I guess, is what motivates the second part of your answer): what bfloat16 encoding does the reference evaluator use? Is that a custom one that is conceptually a duplicate of the ml_dtypes one? I agree that it would be good to use a uniform encoding across all onnx tools/implementations.
The custom types for the ref evaluator are defined here: https://github.com/onnx/onnx/blob/88f8ef15cfaa3138d336f3502aed5018d802bf43/onnx/reference/custom_element_types.py#L8. They are simply byte representations that do not support any arithmetic operations.
With ml_dtypes, computation will be supported in addition to having the correct byte representation.
This should be addressed by onnx/onnx#6170.
Related Issues (20)
- [ONNX] Implement <OpOverload(op='aten.pixel_unshuffle', overload='default')>
- [torchlib] Implement <OpOverload(op='aten.repeat_interleave', overload='Tensor')>
- [torchlib] Implement <OpOverload(op='aten.scatter', overload='src')>
- [torchlib] Implement <OpOverload(op='aten.scatter', overload='value')>
- [torchlib] Implement <OpOverload(op='aten.silu', overload='default')>
- [torchlib] Implement <OpOverload(op='aten.sort', overload='default')>
- [torchlib] Implement <OpOverload(op='aten.std', overload='correction')>
- [torchlib] Implement <OpOverload(op='aten.std_mean', overload='correction')>
- [torchlib] Implement <OpOverload(op='aten.sym_size', overload='int')>
- [torchlib] Implement <OpOverload(op='aten.take', overload='default')>
- [torchlib] Implement <OpOverload(op='aten.unsafe_split', overload='Tensor')>
- [torchlib] Implement <OpOverload(op='torchvision.nms', overload='default')>
- [torchlib] Implement <OpOverload(op='torchvision.roi_align', overload='default')>
- [torchlib] Implement <OpOverload(op='torchvision.roi_pool', overload='default')>
- [exporter] Create a pass to turn tensors into external tensors
- [core] Migrate OpSignature to ONNX Script
- [exporter] Create an IR modularization pass
- [exporter] Create an inliner pass for the IR
- [IR] Create a utility for merging models
- Use IR in the optimizer