Comments (5)
Hello @Perseus14. Are you using version 0.2? Only version 0.2 is currently supported.
I had the same issue when running on CUDA. If that's your case, try:
model = model.cpu()
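A minimal end-to-end sketch of what I mean (assuming the usual conversion flow, with pytorch_to_keras imported from pytorch2keras.converter, the same call pattern that appears in the traceback later in this thread):

import torch
import torchvision
from torch.autograd import Variable
from pytorch2keras.converter import pytorch_to_keras

model = torchvision.models.resnet18(pretrained=True)
model = model.cpu()   # keep the weights on CPU before tracing
model.eval()          # trace the graph in inference mode

# Dummy NCHW input used to trace the model
input_var = Variable(torch.randn(1, 3, 224, 224))
k_model = pytorch_to_keras(model, input_var, (3, 224, 224,), verbose=True)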
Where should I put model = model.cpu()? (I have placed it after model = torchvision.models.resnet18().) Is it needed if I have a CPU-only notebook?
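That is, roughly (using the same names as in the log below):

import torchvision

model = torchvision.models.resnet18()
model = model.cpu()  # placed immediately after constructing the model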
I get an error:
TypeError: _jit_pass_onnx(): incompatible function arguments. The following argument types are supported:
1. (arg0: torch::jit::Graph, arg1: torch._C._onnx.OperatorExportTypes) -> torch::jit::Graph
My env:
cat ~/.keras/keras.json
{
"floatx": "float32",
"epsilon": 1e-07,
"backend": "tensorflow",
"image_data_format": "channels_first"
}
torch.__version__ 0.4.1
keras.__version__ 2.2.2
tensorflow.__version__ 1.9.0
Full log:
/Users/myuser/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
torch.__version__ 0.4.1
keras.__version__ 2.2.2
tensorflow.__version__ 1.9.0
Traceback (most recent call last):
File "resnet18.py", line 25, in <module>
k_model = pytorch_to_keras(model, input_var, (3, 224, 224,), verbose=True)
File "/Users/myuser/anaconda3/lib/python3.6/site-packages/pytorch2keras/converter.py", line 98, in pytorch_to_keras
trace.set_graph(_optimize_graph(trace.graph(), False))
File "/Users/myuser/anaconda3/lib/python3.6/site-packages/pytorch2keras/converter.py", line 43, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, aten)
TypeError: _jit_pass_onnx(): incompatible function arguments. The following argument types are supported:
1. (arg0: torch::jit::Graph, arg1: torch._C._onnx.OperatorExportTypes) -> torch::jit::Graph
Invoked with: graph(%0 : Float(1, 3, 224, 224)
%1 : Float(64, 3, 7, 7)
%2 : Float(64)
%3 : Float(64)
%4 : Float(64)
%5 : Float(64)
%6 : Long()
%7 : Float(64, 64, 3, 3)
%8 : Float(64)
%9 : Float(64)
%10 : Float(64)
%11 : Float(64)
%12 : Long()
%13 : Float(64, 64, 3, 3)
%14 : Float(64)
%15 : Float(64)
%16 : Float(64)
%17 : Float(64)
%18 : Long()
%19 : Float(64, 64, 3, 3)
%20 : Float(64)
%21 : Float(64)
%22 : Float(64)
%23 : Float(64)
%24 : Long()
%25 : Float(64, 64, 3, 3)
%26 : Float(64)
%27 : Float(64)
%28 : Float(64)
%29 : Float(64)
%30 : Long()
%31 : Float(128, 64, 3, 3)
%32 : Float(128)
%33 : Float(128)
%34 : Float(128)
%35 : Float(128)
%36 : Long()
%37 : Float(128, 128, 3, 3)
%38 : Float(128)
%39 : Float(128)
%40 : Float(128)
%41 : Float(128)
%42 : Long()
%43 : Float(128, 64, 1, 1)
%44 : Float(128)
%45 : Float(128)
%46 : Float(128)
%47 : Float(128)
%48 : Long()
%49 : Float(128, 128, 3, 3)
%50 : Float(128)
%51 : Float(128)
%52 : Float(128)
%53 : Float(128)
%54 : Long()
%55 : Float(128, 128, 3, 3)
%56 : Float(128)
%57 : Float(128)
%58 : Float(128)
%59 : Float(128)
%60 : Long()
%61 : Float(256, 128, 3, 3)
%62 : Float(256)
%63 : Float(256)
%64 : Float(256)
%65 : Float(256)
%66 : Long()
%67 : Float(256, 256, 3, 3)
%68 : Float(256)
%69 : Float(256)
%70 : Float(256)
%71 : Float(256)
%72 : Long()
%73 : Float(256, 128, 1, 1)
%74 : Float(256)
%75 : Float(256)
%76 : Float(256)
%77 : Float(256)
%78 : Long()
%79 : Float(256, 256, 3, 3)
%80 : Float(256)
%81 : Float(256)
%82 : Float(256)
%83 : Float(256)
%84 : Long()
%85 : Float(256, 256, 3, 3)
%86 : Float(256)
%87 : Float(256)
%88 : Float(256)
%89 : Float(256)
%90 : Long()
%91 : Float(512, 256, 3, 3)
%92 : Float(512)
%93 : Float(512)
%94 : Float(512)
%95 : Float(512)
%96 : Long()
%97 : Float(512, 512, 3, 3)
%98 : Float(512)
%99 : Float(512)
%100 : Float(512)
%101 : Float(512)
%102 : Long()
%103 : Float(512, 256, 1, 1)
%104 : Float(512)
%105 : Float(512)
%106 : Float(512)
%107 : Float(512)
%108 : Long()
%109 : Float(512, 512, 3, 3)
%110 : Float(512)
%111 : Float(512)
%112 : Float(512)
%113 : Float(512)
%114 : Long()
%115 : Float(512, 512, 3, 3)
%116 : Float(512)
%117 : Float(512)
%118 : Float(512)
%119 : Float(512)
%120 : Long()
%121 : Float(1000, 512)
%122 : Float(1000)) {
%123 : Dynamic = prim::Undefined(), scope: ResNet/Conv2d[conv1]
%132 : Float(1, 64, 112, 112) = aten::_convolution[stride=[2, 2], padding=[3, 3], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%0, %1, %123), scope: ResNet/Conv2d[conv1]
%137 : Float(1, 64, 112, 112) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%132, %2, %3, %4, %5), scope: ResNet/BatchNorm2d[bn1]
%139 : Float(1, 64, 112, 112) = aten::threshold[threshold={0}, value={0}](%137), scope: ResNet/ReLU[relu]
%142 : Float(1, 64, 56, 56), %143 : Long(1, 64, 56, 56) = aten::max_pool2d_with_indices[kernel_size=[3, 3], stride=[2, 2], padding=[1, 1], dilation=[1, 1], ceil_mode=0](%139), scope: ResNet/MaxPool2d[maxpool]
%144 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer1]/BasicBlock[0]/Conv2d[conv1]
%153 : Float(1, 64, 56, 56) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%142, %7, %144), scope: ResNet/Sequential[layer1]/BasicBlock[0]/Conv2d[conv1]
%158 : Float(1, 64, 56, 56) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%153, %8, %9, %10, %11), scope: ResNet/Sequential[layer1]/BasicBlock[0]/BatchNorm2d[bn1]
%160 : Float(1, 64, 56, 56) = aten::threshold[threshold={0}, value={0}](%158), scope: ResNet/Sequential[layer1]/BasicBlock[0]/ReLU[relu]
%161 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer1]/BasicBlock[0]/Conv2d[conv2]
%170 : Float(1, 64, 56, 56) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%160, %13, %161), scope: ResNet/Sequential[layer1]/BasicBlock[0]/Conv2d[conv2]
%175 : Float(1, 64, 56, 56) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%170, %14, %15, %16, %17), scope: ResNet/Sequential[layer1]/BasicBlock[0]/BatchNorm2d[bn2]
%176 : Float(1, 64, 56, 56) = aten::add[alpha={1}](%175, %142), scope: ResNet/Sequential[layer1]/BasicBlock[0]
%178 : Float(1, 64, 56, 56) = aten::threshold[threshold={0}, value={0}](%176), scope: ResNet/Sequential[layer1]/BasicBlock[0]/ReLU[relu]
%179 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer1]/BasicBlock[1]/Conv2d[conv1]
%188 : Float(1, 64, 56, 56) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%178, %19, %179), scope: ResNet/Sequential[layer1]/BasicBlock[1]/Conv2d[conv1]
%193 : Float(1, 64, 56, 56) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%188, %20, %21, %22, %23), scope: ResNet/Sequential[layer1]/BasicBlock[1]/BatchNorm2d[bn1]
%195 : Float(1, 64, 56, 56) = aten::threshold[threshold={0}, value={0}](%193), scope: ResNet/Sequential[layer1]/BasicBlock[1]/ReLU[relu]
%196 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer1]/BasicBlock[1]/Conv2d[conv2]
%205 : Float(1, 64, 56, 56) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%195, %25, %196), scope: ResNet/Sequential[layer1]/BasicBlock[1]/Conv2d[conv2]
%210 : Float(1, 64, 56, 56) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%205, %26, %27, %28, %29), scope: ResNet/Sequential[layer1]/BasicBlock[1]/BatchNorm2d[bn2]
%211 : Float(1, 64, 56, 56) = aten::add[alpha={1}](%210, %178), scope: ResNet/Sequential[layer1]/BasicBlock[1]
%213 : Float(1, 64, 56, 56) = aten::threshold[threshold={0}, value={0}](%211), scope: ResNet/Sequential[layer1]/BasicBlock[1]/ReLU[relu]
%214 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer2]/BasicBlock[0]/Conv2d[conv1]
%223 : Float(1, 128, 28, 28) = aten::_convolution[stride=[2, 2], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%213, %31, %214), scope: ResNet/Sequential[layer2]/BasicBlock[0]/Conv2d[conv1]
%228 : Float(1, 128, 28, 28) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%223, %32, %33, %34, %35), scope: ResNet/Sequential[layer2]/BasicBlock[0]/BatchNorm2d[bn1]
%230 : Float(1, 128, 28, 28) = aten::threshold[threshold={0}, value={0}](%228), scope: ResNet/Sequential[layer2]/BasicBlock[0]/ReLU[relu]
%231 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer2]/BasicBlock[0]/Conv2d[conv2]
%240 : Float(1, 128, 28, 28) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%230, %37, %231), scope: ResNet/Sequential[layer2]/BasicBlock[0]/Conv2d[conv2]
%245 : Float(1, 128, 28, 28) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%240, %38, %39, %40, %41), scope: ResNet/Sequential[layer2]/BasicBlock[0]/BatchNorm2d[bn2]
%246 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer2]/BasicBlock[0]/Sequential[downsample]/Conv2d[0]
%255 : Float(1, 128, 28, 28) = aten::_convolution[stride=[2, 2], padding=[0, 0], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%213, %43, %246), scope: ResNet/Sequential[layer2]/BasicBlock[0]/Sequential[downsample]/Conv2d[0]
%260 : Float(1, 128, 28, 28) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%255, %44, %45, %46, %47), scope: ResNet/Sequential[layer2]/BasicBlock[0]/Sequential[downsample]/BatchNorm2d[1]
%261 : Float(1, 128, 28, 28) = aten::add[alpha={1}](%245, %260), scope: ResNet/Sequential[layer2]/BasicBlock[0]
%263 : Float(1, 128, 28, 28) = aten::threshold[threshold={0}, value={0}](%261), scope: ResNet/Sequential[layer2]/BasicBlock[0]/ReLU[relu]
%264 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer2]/BasicBlock[1]/Conv2d[conv1]
%273 : Float(1, 128, 28, 28) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%263, %49, %264), scope: ResNet/Sequential[layer2]/BasicBlock[1]/Conv2d[conv1]
%278 : Float(1, 128, 28, 28) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%273, %50, %51, %52, %53), scope: ResNet/Sequential[layer2]/BasicBlock[1]/BatchNorm2d[bn1]
%280 : Float(1, 128, 28, 28) = aten::threshold[threshold={0}, value={0}](%278), scope: ResNet/Sequential[layer2]/BasicBlock[1]/ReLU[relu]
%281 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer2]/BasicBlock[1]/Conv2d[conv2]
%290 : Float(1, 128, 28, 28) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%280, %55, %281), scope: ResNet/Sequential[layer2]/BasicBlock[1]/Conv2d[conv2]
%295 : Float(1, 128, 28, 28) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%290, %56, %57, %58, %59), scope: ResNet/Sequential[layer2]/BasicBlock[1]/BatchNorm2d[bn2]
%296 : Float(1, 128, 28, 28) = aten::add[alpha={1}](%295, %263), scope: ResNet/Sequential[layer2]/BasicBlock[1]
%298 : Float(1, 128, 28, 28) = aten::threshold[threshold={0}, value={0}](%296), scope: ResNet/Sequential[layer2]/BasicBlock[1]/ReLU[relu]
%299 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer3]/BasicBlock[0]/Conv2d[conv1]
%308 : Float(1, 256, 14, 14) = aten::_convolution[stride=[2, 2], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%298, %61, %299), scope: ResNet/Sequential[layer3]/BasicBlock[0]/Conv2d[conv1]
%313 : Float(1, 256, 14, 14) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%308, %62, %63, %64, %65), scope: ResNet/Sequential[layer3]/BasicBlock[0]/BatchNorm2d[bn1]
%315 : Float(1, 256, 14, 14) = aten::threshold[threshold={0}, value={0}](%313), scope: ResNet/Sequential[layer3]/BasicBlock[0]/ReLU[relu]
%316 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer3]/BasicBlock[0]/Conv2d[conv2]
%325 : Float(1, 256, 14, 14) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%315, %67, %316), scope: ResNet/Sequential[layer3]/BasicBlock[0]/Conv2d[conv2]
%330 : Float(1, 256, 14, 14) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%325, %68, %69, %70, %71), scope: ResNet/Sequential[layer3]/BasicBlock[0]/BatchNorm2d[bn2]
%331 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer3]/BasicBlock[0]/Sequential[downsample]/Conv2d[0]
%340 : Float(1, 256, 14, 14) = aten::_convolution[stride=[2, 2], padding=[0, 0], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%298, %73, %331), scope: ResNet/Sequential[layer3]/BasicBlock[0]/Sequential[downsample]/Conv2d[0]
%345 : Float(1, 256, 14, 14) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%340, %74, %75, %76, %77), scope: ResNet/Sequential[layer3]/BasicBlock[0]/Sequential[downsample]/BatchNorm2d[1]
%346 : Float(1, 256, 14, 14) = aten::add[alpha={1}](%330, %345), scope: ResNet/Sequential[layer3]/BasicBlock[0]
%348 : Float(1, 256, 14, 14) = aten::threshold[threshold={0}, value={0}](%346), scope: ResNet/Sequential[layer3]/BasicBlock[0]/ReLU[relu]
%349 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer3]/BasicBlock[1]/Conv2d[conv1]
%358 : Float(1, 256, 14, 14) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%348, %79, %349), scope: ResNet/Sequential[layer3]/BasicBlock[1]/Conv2d[conv1]
%363 : Float(1, 256, 14, 14) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%358, %80, %81, %82, %83), scope: ResNet/Sequential[layer3]/BasicBlock[1]/BatchNorm2d[bn1]
%365 : Float(1, 256, 14, 14) = aten::threshold[threshold={0}, value={0}](%363), scope: ResNet/Sequential[layer3]/BasicBlock[1]/ReLU[relu]
%366 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer3]/BasicBlock[1]/Conv2d[conv2]
%375 : Float(1, 256, 14, 14) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%365, %85, %366), scope: ResNet/Sequential[layer3]/BasicBlock[1]/Conv2d[conv2]
%380 : Float(1, 256, 14, 14) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%375, %86, %87, %88, %89), scope: ResNet/Sequential[layer3]/BasicBlock[1]/BatchNorm2d[bn2]
%381 : Float(1, 256, 14, 14) = aten::add[alpha={1}](%380, %348), scope: ResNet/Sequential[layer3]/BasicBlock[1]
%383 : Float(1, 256, 14, 14) = aten::threshold[threshold={0}, value={0}](%381), scope: ResNet/Sequential[layer3]/BasicBlock[1]/ReLU[relu]
%384 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer4]/BasicBlock[0]/Conv2d[conv1]
%393 : Float(1, 512, 7, 7) = aten::_convolution[stride=[2, 2], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%383, %91, %384), scope: ResNet/Sequential[layer4]/BasicBlock[0]/Conv2d[conv1]
%398 : Float(1, 512, 7, 7) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%393, %92, %93, %94, %95), scope: ResNet/Sequential[layer4]/BasicBlock[0]/BatchNorm2d[bn1]
%400 : Float(1, 512, 7, 7) = aten::threshold[threshold={0}, value={0}](%398), scope: ResNet/Sequential[layer4]/BasicBlock[0]/ReLU[relu]
%401 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer4]/BasicBlock[0]/Conv2d[conv2]
%410 : Float(1, 512, 7, 7) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%400, %97, %401), scope: ResNet/Sequential[layer4]/BasicBlock[0]/Conv2d[conv2]
%415 : Float(1, 512, 7, 7) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%410, %98, %99, %100, %101), scope: ResNet/Sequential[layer4]/BasicBlock[0]/BatchNorm2d[bn2]
%416 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer4]/BasicBlock[0]/Sequential[downsample]/Conv2d[0]
%425 : Float(1, 512, 7, 7) = aten::_convolution[stride=[2, 2], padding=[0, 0], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%383, %103, %416), scope: ResNet/Sequential[layer4]/BasicBlock[0]/Sequential[downsample]/Conv2d[0]
%430 : Float(1, 512, 7, 7) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%425, %104, %105, %106, %107), scope: ResNet/Sequential[layer4]/BasicBlock[0]/Sequential[downsample]/BatchNorm2d[1]
%431 : Float(1, 512, 7, 7) = aten::add[alpha={1}](%415, %430), scope: ResNet/Sequential[layer4]/BasicBlock[0]
%433 : Float(1, 512, 7, 7) = aten::threshold[threshold={0}, value={0}](%431), scope: ResNet/Sequential[layer4]/BasicBlock[0]/ReLU[relu]
%434 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer4]/BasicBlock[1]/Conv2d[conv1]
%443 : Float(1, 512, 7, 7) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%433, %109, %434), scope: ResNet/Sequential[layer4]/BasicBlock[1]/Conv2d[conv1]
%448 : Float(1, 512, 7, 7) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%443, %110, %111, %112, %113), scope: ResNet/Sequential[layer4]/BasicBlock[1]/BatchNorm2d[bn1]
%450 : Float(1, 512, 7, 7) = aten::threshold[threshold={0}, value={0}](%448), scope: ResNet/Sequential[layer4]/BasicBlock[1]/ReLU[relu]
%451 : Dynamic = prim::Undefined(), scope: ResNet/Sequential[layer4]/BasicBlock[1]/Conv2d[conv2]
%460 : Float(1, 512, 7, 7) = aten::_convolution[stride=[1, 1], padding=[1, 1], dilation=[1, 1], transposed=0, output_padding=[0, 0], groups=1, benchmark=0, deterministic=0, cudnn_enabled=1](%450, %115, %451), scope: ResNet/Sequential[layer4]/BasicBlock[1]/Conv2d[conv2]
%465 : Float(1, 512, 7, 7) = aten::batch_norm[training=0, momentum=0, eps=1e-05, cudnn_enabled=1](%460, %116, %117, %118, %119), scope: ResNet/Sequential[layer4]/BasicBlock[1]/BatchNorm2d[bn2]
%466 : Float(1, 512, 7, 7) = aten::add[alpha={1}](%465, %433), scope: ResNet/Sequential[layer4]/BasicBlock[1]
%468 : Float(1, 512, 7, 7) = aten::threshold[threshold={0}, value={0}](%466), scope: ResNet/Sequential[layer4]/BasicBlock[1]/ReLU[relu]
%470 : Float(1, 512, 1, 1) = aten::avg_pool2d[kernel_size=[7, 7], stride=[1, 1], padding=[0, 0], ceil_mode=0, count_include_pad=1](%468), scope: ResNet/AvgPool2d[avgpool]
%471 : Long() = aten::size[dim=0](%470), scope: ResNet
%472 : Long() = prim::Constant[value={-1}](), scope: ResNet
%473 : Dynamic = aten::stack[dim=0](%471, %472), scope: ResNet
%474 : Float(1, 512) = aten::view(%470, %473), scope: ResNet
%475 : Float(512!, 1000!) = aten::t(%121), scope: ResNet/Linear[fc]
%476 : Float(1, 1000) = aten::expand[size=[1, 1000], implicit=1](%122), scope: ResNet/Linear[fc]
%477 : Float(1, 1000) = aten::addmm[beta={1}, alpha={1}](%476, %474, %475), scope: ResNet/Linear[fc]
return (%477);
}
, False
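If I read the error right, _jit_pass_onnx now expects a torch._C._onnx.OperatorExportTypes enum as its second argument, while converter.py line 43 passes the bool aten. A hypothetical compatibility helper, based only on my reading of the message above and not a verified fix:

import torch

def _jit_pass_onnx_compat(graph, aten):
    # Map the old bool flag onto the enum the pass now expects
    # (the ONNX / ONNX_ATEN member names are an assumption)
    export_type = (torch._C._onnx.OperatorExportTypes.ONNX_ATEN if aten
                   else torch._C._onnx.OperatorExportTypes.ONNX)
    return torch._C._jit_pass_onnx(graph, export_type)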
I have also checked this ONNX tutorial: https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb
The model exports successfully:
from torch.autograd import Variable
import torch.onnx
import torchvision

# Export the model to ONNX format using a dummy NCHW input
dummy_input = Variable(torch.randn(1, 3, 224, 224))
model = torchvision.models.resnet18(pretrained=True)
torch.onnx.export(model, dummy_input, "resnet18.onnx")
But the check does not pass:
import onnx
model = onnx.load("resnet18.onnx")
onnx.checker.check_model(model)
print(onnx.helper.printable_graph(model.graph))
Traceback (most recent call last):
File "resnet18_onnx.py", line 13, in <module>
onnx.checker.check_model(model)
File "/Users/myuser/anaconda3/lib/python3.6/site-packages/onnx/checker.py", line 77, in check_model
C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Input index 3 must be set to consumed for operator BatchNormalization
==> Context: Bad node spec: input: "123" input: "2" input: "3" input: "4" input: "5" output: "124" op_type: "BatchNormalization" attribute { name: "epsilon" f: 1e-05 type: FLOAT } attribute { name: "is_test" i: 1 type: INT } attribute { name: "momentum" f: 1 type: FLOAT } doc_string: "/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py(1254): batch_norm\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py(66): forward\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py(465): _slow_forward\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py(475): __call__\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/resnet.py(140): forward\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py(465): _slow_forward\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py(475): __call__\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py(109): forward\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py(477): __call__\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py(77): get_trace_graph\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py(144): _trace_and_get_graph_from_model\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py(177): _model_to_graph\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py(226): _export\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py(94): export\n/Users/myuser/anaconda3/lib/python3.6/site-packages/torch/onnx/__init__.py(26): export\nresnet18_onnx.py(7): <module>\n"
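The failing node still carries the legacy is_test attribute from the old BatchNormalization spec, so this looks like an opset mismatch between what this PyTorch build emits and what the checker validates against. If I understand correctly (an assumption on my part, not verified here), newer PyTorch releases let you pin the opset at export time, which avoids the legacy attributes:

from torch.autograd import Variable
import torch.onnx
import torchvision

dummy_input = Variable(torch.randn(1, 3, 224, 224))
model = torchvision.models.resnet18(pretrained=True)
# opset_version only exists in newer PyTorch releases, not in 0.4.1
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=9)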
If anyone wants to convert ResNet18 from PyTorch to TensorFlow, this might help:
https://github.com/CR-Ko/Pytorch2TF
Related Issues (20)
- Type error while converting PyTorch model - Input 'y' of 'Mul' Op has type float32 that does not match type int32 of argument 'x'
- Converted keras model has different parameters
- ValueError: Operands could not be broadcast together with shapes , when converting with change_ordering
- pip install
- "image_data_format": "channels_first",
- concatenate axis not getting changed when switching from NCHW mode to NHWC mode
- onnx.optimizer does not exist anymore
- Batch Normalization Layers not present in the converted Keras model
- AttributeError: 'tuple' object has no attribute 'ndims'
- adding new converters functions for onnx2keras
- cannot import name 'optimizer' from 'onnx'
- converts torch model with dtype double (float64) to keras with float32
- TypeError: export() got an unexpected keyword argument 'do_constant_folding'
- Convert NCHW to NHWC
- How to import pytorch 2 keras?
- question
- ValueError: 'onnx::Ma/' is not a valid root scope name.
- AttributeError: Number of inputs is not equal 1 for unsqueeze layer
- Model's Accuracy Drops A Lot after converting
- padding bug