
Comments (9)

younghuvee avatar younghuvee commented on September 26, 2024

help!

from onnx.

xadupre avatar xadupre commented on September 26, 2024

It is difficult to tell without more information. I assume you cannot share the model. Is it possible to share the exact error you see?


younghuvee avatar younghuvee commented on September 26, 2024

https://github.com/younghuvee/comer

This is my model and inference code; the issue appears in the decoder model.

First loop results (screenshots): PyTorch output "torch1", ONNX output "onnx1".

Second loop results (screenshots): PyTorch output "torch2", ONNX output "onnx2".

Here is my ONNX inference code:

import os
from collections import OrderedDict

import numpy as np
import onnxruntime


def onnxruntime_infer(onnx_path):
    # The flattened tensors were dumped to disk one value per line.
    src_inp = []
    with open('./bin/1.log', 'r') as file:
        for line in file:
            src_inp.append(float(line))
    mask_inp = []
    with open('./bin/2.log', 'r') as file:
        for line in file:
            mask_inp.append(float(line))

    # Token ids for the second decoding loop, reshaped to (2, 2) below.
    input_id = [1, 20, 2, 28]

    src_inp = np.array(src_inp, dtype=np.float32).reshape(2, 13, 50, 256)
    mask_inp = np.array(mask_inp, dtype=np.bool_).reshape(2, 13, 50)
    input_id = np.array(input_id, dtype=np.int32).reshape(2, 2)

    session = onnxruntime.InferenceSession(onnx_path, providers=['CPUExecutionProvider'])
    input_names = [x.name for x in session.get_inputs()]
    output_names = [x.name for x in session.get_outputs()]
    print("onnx input names:", input_names)
    print("onnx output names:", output_names)

    ort_outs = session.run(None, {"input1": src_inp, "input2": mask_inp, "input3": input_id})
    ort_outs = OrderedDict(zip(output_names, ort_outs))

    for key, val in ort_outs.items():
        print(key, val)


if __name__ == '__main__':
    base_path = "./"
    onnx_file = os.path.join(base_path, "data/model/decoder_0521_ds.onnx")
    onnxruntime_infer(onnx_file)
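When comparing the two runtimes, a tolerance-based check is more reliable than reading printed tensors. A minimal sketch, where `torch_out` and `ort_out` are made-up stand-ins for one decoding step's outputs (in practice they would come from the PyTorch model and from `session.run(...)`):

```python
import numpy as np

# Hypothetical stand-ins for one decoding step's outputs.
torch_out = np.array([[0.10, 0.72, 0.18],
                      [0.05, 0.90, 0.05]], dtype=np.float32)
ort_out = np.array([[0.10, 0.72, 0.18],
                    [0.05, 0.90, 0.05]], dtype=np.float32)

# Report the worst element-wise disagreement instead of eyeballing prints.
max_abs_diff = float(np.max(np.abs(torch_out - ort_out)))
print("max abs diff:", max_abs_diff)

# Tolerance-based check: raises AssertionError if the outputs diverge.
np.testing.assert_allclose(torch_out, ort_out, rtol=1e-3, atol=1e-5)
```

Small float differences between runtimes are normal; a mismatch like the one reported here would show up as a large `max_abs_diff`, not just formatting noise.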


younghuvee avatar younghuvee commented on September 26, 2024

It is difficult to tell without more information. I assume you cannot share the model. Is it possible to share the exact error you see?

I need your help, thank you!


younghuvee avatar younghuvee commented on September 26, 2024

(screenshot attached)


younghuvee avatar younghuvee commented on September 26, 2024

Is it a bug in ONNX?


xadupre avatar xadupre commented on September 26, 2024

You said this model necessitates cyclic inference, with the size of the third input node expanding with each inference. Is it possible to show where this mechanism is implemented in your code, and how you did it with ONNX?


younghuvee avatar younghuvee commented on September 26, 2024

You said this model necessitates cyclic inference, with the size of the third input node expanding with each inference. Is it possible to show where this mechanism is implemented in your code, and how you did it with ONNX?

I set dynamic axes when converting the model:

torch.onnx.export(
    decoder,
    (src[0].to(device), src_mask[0].to(device), input_ids.to(device)),
    "decoder_0520_ds.onnx",
    input_names=["input1", "input2", "input3"],
    output_names=["output"],
    dynamic_axes={
        "input1": {2: "input_width"},
        "input2": {2: "input_width"},
        "input3": {1: "length"},
    },
)

The inference code is as follows

input_id = [1, 20, 2, 28]

src_inp = np.array(src_inp, dtype=np.float32).reshape(2, 13, 50, 256)
mask_inp = np.array(mask_inp, dtype=np.bool_).reshape(2, 13, 50)
input_id = np.array(input_id, dtype=np.int32).reshape(2, 2)

session = onnxruntime.InferenceSession(onnx_path, providers=['CPUExecutionProvider'])
input_names = [x.name for x in session.get_inputs()]
output_names = [x.name for x in session.get_outputs()]
print("onnx input names:", input_names)
print("onnx output names:", output_names)

ort_outs = session.run(None, {"input1": src_inp, "input2": mask_inp, "input3": input_id})

The above is the test code I wrote to verify the consistency of the model; I have not developed a complete ONNX-based inference pipeline. I only used ONNX as an intermediate format for converting the model, and ultimately used MNN. In this process I found that the inference results of MNN are consistent with ONNX, but the results of ONNX are inconsistent with PyTorch.


younghuvee avatar younghuvee commented on September 26, 2024


input_id: in the first loop its size is (2, 1) and the result is the same as PyTorch's; in the second loop its size is (2, 2) and the result differs from PyTorch's, so I set its input manually.
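The growth described here, (2, 1) in the first loop and (2, 2) in the second, can be sketched with a stand-in for `session.run`, just to show how the length axis is meant to expand between loops. `predict_next` and the token values are hypothetical:

```python
import numpy as np

def predict_next(input_id):
    # Stand-in for session.run(...): returns one hypothetical next token
    # per sequence. In the real code this would be the decoder's argmax.
    return np.full((input_id.shape[0], 1), 2, dtype=np.int32)

# Start-of-sequence token for each of the 2 sequences: shape (2, 1).
input_id = np.array([[1], [1]], dtype=np.int32)

# Each loop appends the predicted token, so the "length" axis grows
# (2, 1) -> (2, 2) -> (2, 3), matching the dynamic axis "length".
for _ in range(2):
    next_tok = predict_next(input_id)
    input_id = np.concatenate([input_id, next_tok], axis=1)

print(input_id.shape)  # (2, 3)
```

In this scheme the first-loop and second-loop inputs are exactly the (2, 1) and (2, 2) arrays the comparison above was run on.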

