Comments (11)
Don't resize the output:
m_mnnNet_decoder->resizeTensor(output_vector, {2, input_ids_size, 46});
from mnn.
> Don't resize the output:
> m_mnnNet_decoder->resizeTensor(output_vector, {2, input_ids_size, 46});

Tried it; nothing changed.
If I export the ONNX model directly at the required size, without enabling dynamic_size, the inference results are correct.
After exporting the ONNX with dynamic_size enabled, what do you get when you test it against the specified input with testMNNFromOnnx.py?
int i_modelW2 = input_img->width();
int i_modelH2 = input_img->height();
int i_modelC2 = input_img->channel();
int i_modelB2 = input_img->batch();
int i2_modelW2 = input_mask->width();
int i2_modelH2 = input_mask->height();
int i2_modelC2 = input_mask->channel();
int i2_modelB2 = input_mask->batch();
int m_modelW2 = input_ids->width();
int m_modelH2 = input_ids->height();
int m_modelC2 = input_ids->channel();
int m_modelB2 = input_ids->batch();
int o_modelW2 = output_vector->width();
int o_modelH2 = output_vector->height();
int o_modelC2 = output_vector->channel();
int o_modelB2 = output_vector->batch();
This section has a problem: for tensors that are not 4-D, don't use width()/height() and friends; use length(0), length(1), length(2) instead.
::memcpy(input_1->writeMap(), src_mask.data(), src_mask.size() * sizeof(bool));
Replace all of these bool with int32_t.
> After exporting the ONNX with dynamic_size enabled, what do you get when you test it against the specified input with testMNNFromOnnx.py?

Results as shown in the figure. This error looks acceptable, not too large; the error in C++ is much larger.
Could the error be coming from resizeSession?
> After exporting the ONNX with dynamic_size enabled, what do you get when you test it against the specified input with testMNNFromOnnx.py?
> Results as shown in the figure. This error looks acceptable, not too large; the error in C++ is much larger.

But MNN's inference results were compared against PyTorch, whereas this result compares MNN against ONNX.
1
> After exporting the ONNX with dynamic_size enabled, what do you get when you test it against the specified input with testMNNFromOnnx.py?
> Results as shown in the figure. This error looks acceptable, not too large; the error in C++ is much larger.

This error is actually quite large. Update to 2.9.0 and test again; if the problem persists, please share the onnx.