Comments (16)
if you edit the file LayerNorm.cpp the op is still called LayerNorm, but the custom op is called LayerNormalization according to "LayerNormalization not supported yet!", so maybe you should declare a new op class.
I know what you mean. I wrote two files named "LayerNormalization.h" and "LayerNormalization.cpp", modified src/CMakeLists.txt with ncnn_add_layer(LayerNormalization), and then compiled again. But it doesn't seem to work.
yeah, I ran into the same situation, but I don't know why it didn't work
Help, please. @nihui
from ncnn.
I used PNNX to resolve my trouble in the end!
Thanks @nihui for PNNX!
What is the result when transferring the model into .param & .bin? Are some ops not supported? I checked the output from different layers and found it prints "NAN" after some middle layers, but I can't locate it. So maybe an unsupported op exists. Can you upload the original model file (e.g. .onnx) so I can check the model structure further?
Thank you for your reply!
I have found the reason why the model outputs NaN. The original author implemented a custom LayerNorm operation, which can be written in PyTorch as:
class LayerNorm2d_Sc(nn.Module):
    """The author's custom LayerNorm. In theory PyTorch can do this by permuting
    dimensions, which I verified, but it cannot currently be reproduced in ncnn."""

    def __init__(self, channels, eps=1e-6):
        super(LayerNorm2d_Sc, self).__init__()
        self.register_parameter('weight', nn.Parameter(torch.ones(channels)))
        self.register_parameter('bias', nn.Parameter(torch.zeros(channels)))
        self.eps = eps
        self.torch_layernorm = torch.nn.LayerNorm(channels, eps=eps, elementwise_affine=False)

    def forward(self, x):
        # I tried replacing this with PyTorch's LayerNorm; both the PyTorch code and
        # the exported onnx give correct results, but the conversion to ncnn fails.
        # C = x.shape[1]
        # x_ = x.clone()
        # x_ = x_.permute(0, 2, 3, 1)
        # y = self.torch_layernorm(x_)
        # y = y.permute(0, 3, 1, 2)
        # # y = self.weight.view(1, C, 1, 1) * y + self.bias.view(1, C, 1, 1)
        # return y

        # The original author's custom LayerNorm. Both PyTorch and the exported onnx
        # give correct results, but after converting to ncnn, inference produces an
        # all-black image.
        C = x.shape[1]
        x_ = x.clone()
        mu = x_.mean(dim=1, keepdim=True)
        var = (x_ - mu).pow(2).mean(dim=1, keepdim=True)
        y = (x_ - mu) / (var + self.eps).sqrt()
        y = self.weight.view(1, C, 1, 1) * y + self.bias.view(1, C, 1, 1)
        return y
I tried using numpy instead of PyTorch. The inference result was not completely black, but it was not normal either.
I saw in ncnn's wiki that layers can be customized, and I am trying to add the author's custom LayerNorm. (If I understand correctly, the layout processed by the ncnn model in C++ is WHC, and the output is also WHC. But in Python, ncnn output seems to be CHW; at least I can get correct results by treating it as CHW. Of course, I care more about the results in C++.)
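To double-check that the author's per-channel normalization really is the same math as a standard last-dim LayerNorm applied after permuting NCHW to NHWC, here is a small numpy sketch (a sketch only; the shapes and tolerance are my own assumptions, not from the original model):

```python
import numpy as np

def layernorm_channel(x, eps=1e-6):
    # Normalize across the channel dim (axis=1) of an NCHW tensor,
    # like the custom LayerNorm2d_Sc above (affine terms omitted).
    mu = x.mean(axis=1, keepdims=True)
    var = ((x - mu) ** 2).mean(axis=1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def layernorm_lastdim_via_permute(x, eps=1e-6):
    # Permute NCHW -> NHWC, normalize over the last dim, permute back.
    x_ = np.transpose(x, (0, 2, 3, 1))
    mu = x_.mean(axis=-1, keepdims=True)
    var = ((x_ - mu) ** 2).mean(axis=-1, keepdims=True)
    y = (x_ - mu) / np.sqrt(var + eps)
    return np.transpose(y, (0, 3, 1, 2))

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8, 4, 4)).astype(np.float32)
a = layernorm_channel(x)
b = layernorm_lastdim_via_permute(x)
print(np.max(np.abs(a - b)))  # ~0: the two forms are equivalent
```

So any numerical difference after conversion comes from the converter/runtime, not from the math.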
Hello!
1、But in my practice, the layout processed by the ncnn model in C++ is also CDHW, and the output is also CDHW. See the C++ code below that flattens the output. It means [Channel, Depth, Height, Width]. So,
void pretty_print(const ncnn::Mat &m, std::vector<float> &vec_heap) {
    for (int q = 0; q < m.c; q++) {
        const float *ptr = m.channel(q);
        for (int z = 0; z < m.d; z++) {
            for (int y = 0; y < m.h; y++) {
                for (int x = 0; x < m.w; x++) {
                    vec_heap.emplace_back(ptr[x]);
                }
                ptr += m.w;
            }
        }
    }
}
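For reference, the CDHW traversal order of the loop above matches numpy's default row-major flatten of a (c, d, h, w) array; a quick sketch to confirm (the shape is an arbitrary assumption):

```python
import numpy as np

def flatten_cdhw(m):
    # Emulate the pretty_print loop: channels, then depth, rows, columns.
    out = []
    c, d, h, w = m.shape
    for q in range(c):
        for z in range(d):
            for y in range(h):
                for x in range(w):
                    out.append(m[q, z, y, x])
    return out

m = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)
print(flatten_cdhw(m) == list(m.flatten()))  # True: same order as C row-major flatten
```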
2、Your own LayerNorm2d_Sc works the same as the original one. If your own LayerNorm2d_Sc works but fails when converting to an ncnn model, maybe you can update the ncnn version and compile the layernorm operation (see #5262 (comment) for details). Could you post the error message?
And as for "inference after converting to ncnn produces an all-black image", maybe you need to re-normalize the output to [0, 255] to get the final image.
Here is the onnx exported from PyTorch, without onnxsim.
model_trace_1.4M_512.onnx.zip
I do these operations to get the ncnn model: PyTorch model --> onnxsim --> ncnn. But I got "LayerNormalization not supported yet!" when converting it to ncnn:
./onnx2ncnn model_trace_1.4M_512_sim.onnx test_ncnn.param test_ncnn.bin
LayerNormalization not supported yet!
# axis=-1
# epsilon=1e-06
(the same three lines are repeated 20 times in total)
The number of errors reported may correspond to the number of custom LayerNorm operations.
In addition, I tried to extend ncnn's LayerNorm to implement the following:
// modified in src/layer/layernorm.cpp
else if (affine_size == channels)
{
    #pragma omp parallel for num_threads(opt.num_threads)
    for (int i = 0; i < size; i++)
    {
        // mean over the channel dim at spatial position i
        float sum = 0.f;
        for (int q = 0; q < channels; q++)
        {
            sum += bottom_top_blob.channel(q)[i];
        }
        float mean = sum / channels;

        // variance
        float sqsum = 0.f;
        float tmp = 0.f;
        for (int q = 0; q < channels; q++)
        {
            tmp = bottom_top_blob.channel(q)[i] - mean;
            sqsum += tmp * tmp;
        }
        float var = sqsum / channels;

        float a = 1.f / (sqrtf(var + eps));
        float b = -mean * a;
        for (int q = 0; q < channels; q++) // note: was "i++" here, which never terminates
        {
            bottom_top_blob.channel(q)[i] = bottom_top_blob.channel(q)[i] * a + b;
        }
    }
}
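Not ncnn code itself, just a numpy sketch to check the math of the loop above: precomputing a = 1/sqrt(var + eps) and b = -mean * a, then writing y = x * a + b, is algebraically identical to (x - mean)/sqrt(var + eps). The toy blob shape is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 16)).astype(np.float32)  # (channels, spatial positions)
eps = 1e-6

mean = x.mean(axis=0)                 # per-position mean over channels
var = ((x - mean) ** 2).mean(axis=0)  # biased variance, as in the C++ loop
a = 1.0 / np.sqrt(var + eps)
b = -mean * a

y_trick = x * a + b                   # the precomputed-coefficient form
y_direct = (x - mean) / np.sqrt(var + eps)
print(np.max(np.abs(y_trick - y_direct)))  # ~0: the two forms agree
```

The a/b form saves one subtraction and one division per element inside the innermost loop, which is why ncnn's layernorm code uses it.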
And execute the commands under ncnn/build:
cmake ..
make -j64
make install
When I converted the onnx-sim file to ncnn, I got the same error as above.
Thanks again for your reply, and I believe I can figure ncnn out with your help. ^_^
Haha, I got "LayerNormalization not supported yet!" when converting to ncnn too.
I added the LayerNorm implementation in ncnn, so why is it still not supported? It feels like the conversion process does not call ncnn's LayerNorm.
1、I didn't try to register my own op, but I think it should be an individual .h & .cpp file declaring the class LayerNormalization, and then in /ncnn/src/CMakeLists.txt line 169 add ncnn_add_layer(LayerNormalization)
I have tried to supplement the LayerNorm implementation in ncnn: I added a LayerNormalization implementation following the reference document on adding a custom layer, and recompiled.
When onnx is converted to ncnn, the error is still reported and the LayerNormalization operation is not supported.
Did I compile it incorrectly? (The compilation prompts "Could NOT find protobuf (missing: protobuf_DIR)", but subsequent make etc. still succeed.)
1. LayerNorm in ncnn supports normalization along the channel dim.
2. I added a new LayerNormalization implementation in ncnn, but it doesn't seem to work.
Thanks again. I won't give up, and I'll solve this problem sooner or later. I'm committed to ncnn; it's perfect in my view.