
pytorchtocaffe's Introduction

Since the pytorch models we use are basically dynamic graph structures, the graph cannot be determined until a forward pass has completed, while caffe is a static-graph framework. This mismatch causes many problems when converting models from pytorch to caffe, and pytorch versions iterate quickly, so this repo is no longer recommended. If you want to convert pytorch to caffe, we suggest going pytorch -> onnx -> caffe via this repo: https://github.com/xxradon/ONNXToCaffe.
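
For the first step of that route, exporting the pytorch model to onnx is a one-liner; a minimal sketch (the opset and file names here are illustrative, and the onnx-to-caffe step is documented in the ONNXToCaffe repo):

    import torch
    from torchvision.models import resnet18

    model = resnet18(pretrained=True).eval()
    dummy = torch.ones(1, 3, 224, 224)
    # Export a traced graph; ONNXToCaffe then consumes the .onnx file.
    torch.onnx.export(model, dummy, 'resnet18.onnx')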

The code mainly comes from nn_tools. Thanks to hahnyuan for the contribution.

Neural Network Tools: Converter and Analyser

Providing tools for the pytorch and caffe neural network frameworks.

The nn_tools is released under the MIT License (refer to the LICENSE file for details).

features

  1. Converting a pytorch model to a caffe model.
  2. Convenient tools for manipulating caffemodel and prototxt files quickly (e.g. getting or setting layer weights).
  3. Supports pytorch version >= 0.2 (tested on 0.3, 0.3.1, 0.4, 0.4.1, 1.0, 1.2).
  4. Analysing a model to get the number of operations (ops) in every layer.

Note: pytorch version 1.1 is not supported at the moment.

requirements

  • Python 2.7 or Python 3.x
  • Each function in this tool requires the corresponding neural network python package (pytorch and so on).

Analyser

Given an input tensor size, the analyser can report, for every layer of a model, the input size, output size, multiplication ops, addition ops, comparison ops, total ops, weight size and so on, which is convenient for deployment analysis.

Caffe

Before you analyse your network, Netscope is recommended for visualizing it.

Command: python caffe_analyser.py [-h] prototxt outdir shape

  • The prototxt is the path of the prototxt file.
  • The outdir is the path to save the csv file.
  • The shape is the input shape of the network (comma-separated); in caffe the image shape should be: batch_size, channel, image_height, image_width.

For example: python caffe_analyser.py resnet_18_deploy.prototxt analys_result.csv 1,3,224,224

Pytorch

Supports analysing any subclass of the torch.nn.Module class.

Command: pytorch_analyser.py [-h] [--out OUT] [--class_args ARGS] path name shape

  • The path is the python file path containing your class.
  • The name is the class name or instance name in your python file.
  • The shape is the input shape of the network (comma-separated); in pytorch the image shape should be: batch_size, channel, image_height, image_width.
  • The out (optional) is the path to save the csv file; default is '/tmp/pytorch_analyse.csv'.
  • The class_args (optional) are the args used to init the class in the python file; default is empty.

For example: python pytorch_analyser.py example/resnet_pytorch_analysis_example.py resnet18 1,3,224,224
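
For reference, a minimal model file you could point the analyser at might look like the sketch below (my_model.py and SmallNet are hypothetical names, not files from this repo):

    # my_model.py -- a hypothetical module to analyse (not part of this repo)
    import torch.nn as nn

    class SmallNet(nn.Module):
        def __init__(self):
            super(SmallNet, self).__init__()
            self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
            self.relu = nn.ReLU(inplace=True)
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Linear(16, 10)

        def forward(self, x):
            x = self.pool(self.relu(self.conv(x)))
            return self.fc(x.view(x.size(0), -1))

It would then be analysed with: python pytorch_analyser.py my_model.py SmallNet 1,3,224,224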

Converter

Pytorch to Caffe

The new version of pytorch_to_caffe supports the newest pytorch versions (from 0.2.0 to 1.2.0). NOTICE: the converted output will differ slightly from the original model because of implementation differences.

  • Supported layer types: conv2d -> Convolution, _conv_transpose2d -> Deconvolution, _linear -> InnerProduct, _split -> Slice, max_pool2d/_avg_pool2d -> Pooling, _max -> Eltwise, _cat -> Concat, dropout -> Dropout, relu -> ReLU, prelu -> PReLU, _leaky_relu -> ReLU, _tanh -> TanH, threshold (only value=0) -> Threshold/ReLU, softmax -> Softmax, batch_norm -> BatchNorm+Scale, instance_norm -> BatchNorm+Scale, _interpolate -> Upsample, _hardtanh -> ReLU6, _permute -> Permute, _l2Norm -> Normalize

  • Supported operations: torch.split, torch.max, torch.cat, torch.sigmoid, torch.div

  • Supported tensor/Variable operations: var.view, + (add), += (iadd), - (sub), -= (isub), * (mul), *= (imul), torch.Tensor.contiguous (_contiguous), torch.Tensor.pow (_pow), torch.Tensor.sum (_sum), torch.Tensor.sqrt (_sqrt), torch.Tensor.unsqueeze (_unsqueeze), torch.Tensor.expand_as (_expand_as)

Need to be added for caffe in the future:

  • DepthwiseConv

The layers supported above are enough to convert many kinds of nets, such as AlexNet (tested), VGG (tested), ResNet (with a fix for a bug in the original repo mainly caused by the ReLU layer function) and Inception_V3 (tested).

The supported layers cover the most popular layers and operations. Other layer types will be added soon; you can ask for them in the issues.

Example: please see example/alexnet_pytorch_to_caffe.py. Just run python3 example/alexnet_pytorch_to_caffe.py.
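
For orientation, the conversion flow in that example boils down to roughly this sketch (assuming torchvision's alexnet and the save_prototxt/save_caffemodel helpers used by the repo's examples; the example file itself is authoritative):

    import torch
    from torch.autograd import Variable
    from torchvision.models import alexnet
    import pytorch_to_caffe

    name = 'alexnet'
    net = alexnet(pretrained=True)
    net.eval()
    # The converter traces the network during a forward pass, so it needs
    # a dummy input of the intended deployment shape.
    input = Variable(torch.ones([1, 3, 224, 224]))
    pytorch_to_caffe.trans_net(net, input, name)
    pytorch_to_caffe.save_prototxt('{}.prototxt'.format(name))
    pytorch_to_caffe.save_caffemodel('{}.caffemodel'.format(name))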

Attention: the main difference in the converted model comes from the BN layer, so pay close attention to BN parameters such as momentum=0.1 and eps=1e-5.
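
As an illustration of that split (a sketch of the parameter mapping, not the converter's actual code): one pytorch BatchNorm2d becomes a caffe BatchNorm layer holding the running statistics plus a Scale layer holding the learned affine parameters:

    import numpy as np
    import torch.nn as nn

    bn = nn.BatchNorm2d(64, eps=1e-5, momentum=0.1)

    # caffe BatchNorm blobs: running mean, running variance, and a scale factor
    # (caffe divides the stored statistics by this factor at inference time)
    caffe_bn_blobs = [
        bn.running_mean.numpy(),
        bn.running_var.numpy(),
        np.array([1.0], dtype=np.float32),
    ]
    # caffe Scale blobs: the learned gamma (weight) and beta (bias)
    caffe_scale_blobs = [
        bn.weight.data.numpy(),
        bn.bias.data.numpy(),
    ]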

Deploy verify (Very Important)

After converting, we should use verify_deploy.py to verify the outputs of the pytorch model and the converted caffe model. To compare the outputs of caffe and pytorch, you should install caffe and pytorch in the same environment; anaconda is recommended. Using the following script, we can install caffe-gpu (master branch):

conda install caffe-gpu pytorch cudatoolkit=9.0 -c pytorch 

Alternatively, we can use docker images from https://github.com/ufoym/deepo. For cuda9:

docker pull ufoym/deepo:all-py36-cu90

For cuda10:

docker pull ufoym/deepo:all-py36-cu100

Please see example/verify_deploy.py; it verifies that the pytorch model and the converted caffe model produce the same output for the same input.
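
The comparison it performs amounts to roughly the following sketch (the input blob name 'blob1' and the alexnet file names are assumptions based on this converter's typical output):

    import numpy as np
    import torch
    import caffe
    from torchvision.models import alexnet

    img = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # pytorch forward
    net_pt = alexnet(pretrained=True).eval()
    with torch.no_grad():
        out_pt = net_pt(torch.from_numpy(img)).numpy()

    # caffe forward on the same input
    caffe.set_mode_cpu()
    net_cf = caffe.Net('alexnet.prototxt', 'alexnet.caffemodel', caffe.TEST)
    net_cf.blobs['blob1'].data[...] = img
    out_cf = net_cf.forward()[net_cf.outputs[0]]

    print('max abs diff:', np.abs(out_pt - out_cf).max())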

Some common functions

funcs.py

  • get_iou(box_a, box_b): intersection over union of two boxes
  • nms(bboxs, scores, thresh): non-maximum suppression
  • Logger: prints strings to a file and stdout with an H:M:S timestamp
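
A hypothetical usage sketch of these helpers (the x1, y1, x2, y2 box layout and exact signatures are assumptions; check funcs.py for the real ones):

    import numpy as np
    from funcs import get_iou, nms

    box_a = [0, 0, 10, 10]   # assumed x1, y1, x2, y2 layout
    box_b = [5, 5, 15, 15]
    print(get_iou(box_a, box_b))  # overlap ratio in [0, 1]

    bboxs = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=np.float32)
    scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
    keep = nms(bboxs, scores, thresh=0.5)  # indices of boxes that survive suppression
    print(keep)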

pytorchtocaffe's People

Contributors

guanmoyu, xxradon

pytorchtocaffe's Issues

None has type NoneType, but expected one of: bytes, unicode

Converting my own model from pytorch to caffe:

WARNING: CANNOT FOUND blob 140437991457808
Traceback (most recent call last):
File "conver_rgb.py", line 49, in
convert(args)
File "conver_rgb.py", line 42, in convert
pytorch_to_caffe.trans_net(net, input_var, name)
File "/home/wangwenpeng/work/FaceAntiSpoofing_readsense_pytorch_patch/pytorch2caffe_MGN/convert/pytorch_to_caffe.py", line 459, in trans_net
out = net.forward(input_var)
File "convert/patch_attention_net.py", line 58, in forward
logit = logit.view(batch_size, -1)
File "/home/wangwenpeng/work/FaceAntiSpoofing_readsense_pytorch_patch/pytorch2caffe_MGN/convert/pytorch_to_caffe.py", line 297, in _view
bottom=[log.blobs(input)],top=top_blobs)
File "/home/wangwenpeng/work/FaceAntiSpoofing_readsense_pytorch_patch/pytorch2caffe_MGN/Caffe/layer_param.py", line 33, in init
self.bottom.extend(bottom)
TypeError: None has type NoneType, but expected one of: bytes, unicode

Can anyone help? It seems to be an input blob error, but the input shape prints fine before NET_INITTED=True in trans_net(), and the error occurs after NET_INITTED=True.

Error when converting vgg11

Hi author, when I ran python example/vgg19_pytorch_to_caffe.py to convert VGG11 to caffe I got an error. How can I solve it?
My versions are cuda9.0, python3.6, pytorch 1.1.0, caffe-gpu 1.0.
The output is as follows:

Starting Transform, This will take a while
139790540402336:blob1 was added to blobs
Add blob blob1 : torch.Size([1, 3, 224, 224])
139790540402336:blob1 getting
torch ops name: {VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(8): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(9): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(10): ReLU(inplace)
(11): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(12): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(13): ReLU(inplace)
(14): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(15): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(16): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(17): ReLU(inplace)
(18): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(19): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(20): ReLU(inplace)
(21): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(22): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(23): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(24): ReLU(inplace)
(25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(26): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(27): ReLU(inplace)
(28): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace)
(5): Dropout(p=0.5)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
): '', Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(8): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(9): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(10): ReLU(inplace)
(11): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(12): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(13): ReLU(inplace)
(14): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(15): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(16): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(17): ReLU(inplace)
(18): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(19): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(20): ReLU(inplace)
(21): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(22): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(23): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(24): ReLU(inplace)
(25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(26): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(27): ReLU(inplace)
(28): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
): 'features', Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)): 'features.0', BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True): 'features.1', ReLU(inplace): 'features.2', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False): 'features.3', Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)): 'features.4', BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True): 'features.5', ReLU(inplace): 'features.6', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False): 'features.7', Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)): 'features.8', BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True): 'features.9', ReLU(inplace): 'features.10', Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)): 'features.11', BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True): 'features.12', ReLU(inplace): 'features.13', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False): 'features.14', Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)): 'features.15', BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True): 'features.16', ReLU(inplace): 'features.17', Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)): 'features.18', BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True): 'features.19', ReLU(inplace): 'features.20', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False): 'features.21', Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)): 'features.22', BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True): 'features.23', ReLU(inplace): 'features.24', Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)): 'features.25', BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True): 'features.26', ReLU(inplace): 'features.27', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False): 'features.28', AdaptiveAvgPool2d(output_size=(7, 7)): 'avgpool', Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace)
(5): Dropout(p=0.5)
(6): Linear(in_features=4096, out_features=1000, bias=True)
): 'classifier', Linear(in_features=25088, out_features=4096, bias=True): 'classifier.0', ReLU(inplace): 'classifier.1', Dropout(p=0.5): 'classifier.2', Linear(in_features=4096, out_features=4096, bias=True): 'classifier.3', ReLU(inplace): 'classifier.4', Dropout(p=0.5): 'classifier.5', Linear(in_features=4096, out_features=1000, bias=True): 'classifier.6'}
features.0
conv1 was added to layers
139790540402264:conv_blob1 was added to blobs
Add blob conv_blob1 : torch.Size([1, 64, 224, 224])
139790540402336:blob1 getting
139790540402264:conv_blob1 getting
add1 was added to layers
139790540402480:add_blob1 was added to blobs
Add blob add_blob1 : torch.Size([])
Traceback (most recent call last):
File "example/vgg19_pytorch_to_caffe.py", line 11, in
pytorch_to_caffe.trans_net(net,input,name)
File "./pytorch_to_caffe.py", line 654, in trans_net
out = net.forward(input_var)
File "/home/xs/anaconda3/envs/caffe/lib/python3.6/site-packages/torchvision/models/vgg.py", line 42, in forward
x = self.features(x)
File "/home/xs/anaconda3/envs/caffe/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/xs/anaconda3/envs/caffe/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/xs/anaconda3/envs/caffe/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/xs/anaconda3/envs/caffe/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 74, in forward
self.num_batches_tracked += 1
File "./pytorch_to_caffe.py", line 496, in _iadd
bottom=[log.blobs(input),log.blobs(args[0])], top=top_blobs)
File "./pytorch_to_caffe.py", line 89, in blobs
print("{}:{} getting".format(var, self._blobs[var]))
File "./pytorch_to_caffe.py", line 32, in getitem
return self.data[key]
KeyError: 139791099045136

'module' object has no attribute 'walk_stack'

Hi, thanks for your nice work.
I ran the example resnet_pytorch_2_caffe.py.
I downloaded resnet18 from https://download.pytorch.org/models/resnet18-5c106cde.pth.
I get the following error. Could you help me find the reason? Thank you very much.
Traceback (most recent call last):
File "example/resnet_pytorch_2_caffe.py", line 17, in
pytorch_to_caffe.trans_net(resnet18,input,name)
File "./pytorch_to_caffe.py", line 613, in trans_net
out = net.forward(input_var)
File "/home/scr/anaconda2/envs/pytorchenv/lib/python2.7/site-packages/torchvision/models/resnet.py", line 192, in forward
x = self.conv1(x)
File "/home/scr/anaconda2/envs/pytorchenv/lib/python2.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/scr/anaconda2/envs/pytorchenv/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
File "./pytorch_to_caffe.py", line 528, in call
for stack in traceback.walk_stack(None):
AttributeError: 'module' object has no attribute 'walk_stack'


Verification Error On Resnet18

This project helps a lot. However, I found a problem converting resnet18.

During the transformation, I got a warning message:

WARNING: the output shape miss match at max_pool1: input torch.Size([1, 64, 112, 112]) output---Pytorch:torch.Size([1, 64, 56, 56])---Caffe:torch.Size([1, 64, 57, 57])
This is caused by the different implementation that ceil mode in caffe and the floor mode in pytorch.
You can add the clip layer in caffe prototxt manually if shape mismatch error is caused in caffe. 

And the above warning leads to an error in verification:

F1225 20:18:19.848548 2458284928 net.cpp:757] Cannot copy param 0 weights from layer 'fc1'; shape mismatch.  Source param shape is 1000 512 (512000); target param shape is 1000 2048 (2048000). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.
*** Check failure stack trace: ***

I need to manually add a crop layer to correct the size mismatch, but I do not know how to do it. Caffe's crop layer requires two input layers.

There is a useless crop layer below:

layer {
  name: "max_pool1"
  type: "Pooling"
  bottom: "relu_blob1"
  top: "max_pool_blob1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
    pad: 1
  }
}

layer {
  name: "crop_layer"
  type: "Crop"
  bottom: "max_pool_blob1"
  bottom: "max_pool_blob1"
  top: "cropped_blob1"
  crop_param {
    axis: 2
    offset: 0
    offset: 0
  }
}

layer {
  name: "conv2"
  type: "Convolution"
  bottom: "cropped_blob1"
  top: "conv_blob2"
  convolution_param {
    num_output: 64
    bias_term: false
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    dilation: 1
  }
}

Any help is appreciated!

Unsupported operation: x[:, :, 1:, 1:]

@xxradon It gives an error on this operation: x = self.conv(x[:, :, 1:, 1:]). The operation x[:, :, 1:, 1:] crops a region of the feature map, which is somewhat similar to Slice, but Slice works on the channel dimension.
I don't know if there is a corresponding operation in caffe; can you give me some advice?
Thank you.

Support for custom layers

Hi,
Thanks for the great utility!
Can someone give any pointers on how to go about converting a PyTorch model to Caffe with custom layers?
I have a custom layer as the last layer. I would like to remove it (like Keras pop) before converting to Caffe.

Thanks.

Errors on python2.7

/PytorchToCaffe/analysis/layers.py", line 174
    self.out = Blob([self.batch_size, num_out, *outs], self)
                                                ^
SyntaxError: invalid syntax

batchnorm is not picked up

Converting resnet works fine, but converting this example does not...

import torch
import torch.nn as nn

class Conv_block(torch.nn.Module):
    def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1):
        super(Conv_block, self).__init__()
        self.conv_kk = torch.nn.Conv2d(in_c, out_channels=out_c, kernel_size=kernel, groups=groups, stride=stride, padding=padding, bias=False)
        self.bn_kk = nn.BatchNorm2d(out_c)
        #  self.prelu = torch.nn.PReLU(out_c)
    def forward(self, x):
        x = self.conv_kk(x)
        x = self.bn_kk(x)
        #  x = self.prelu(x)
        return x

class MobileFaceNet(torch.nn.Module):
    def __init__(self, embedding_size, net_type='GNAP'):
        super(MobileFaceNet, self).__init__()

        self.base_modules = nn.Sequential(Conv_block(3, 64, kernel=(3, 3), stride=(2, 2), padding=(1, 1)))

    def forward(self, x):

        x = self.base_modules(x)

        return x

It seems the layer cannot be captured here:

    def __call__(self,*args,**kwargs):
        if not NET_INITTED:
            return self.raw(*args,**kwargs)
        for stack in traceback.walk_stack(None):
            if 'self' in stack[0].f_locals:
                layer=stack[0].f_locals['self']
                if layer in layer_names:
                    log.pytorch_layer_name=layer_names[layer]
                    print('layer {}'.format(log.pytorch_layer_name)) # this line never prints for the bn layer
                    break
        out=self.obj(self.raw,*args,**kwargs)
        # if isinstance(out,Variable):
        #     out=[out]
        return out

traceback

Traceback (most recent call last):
  File "example/mobilenet_pytorch_to_caffe.py", line 14, in <module>
    pytorch_to_caffe.trans_net(net, inp, name)
  File "./pytorch_to_caffe.py", line 616, in trans_net
    out = net.forward(input_var)
  File "./model/mobilenet.py", line 38, in forward
    x = self.base_modules(x)
  File "/home/tumh/.pyenv/versions/pytorch-python3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/tumh/.pyenv/versions/pytorch-python3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/tumh/.pyenv/versions/pytorch-python3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "./model/mobilenet.py", line 21, in forward
    x = self.bn_kk(x)
  File "/home/tumh/.pyenv/versions/pytorch-python3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/tumh/.pyenv/versions/pytorch-python3/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 67, in forward
    self.num_batches_tracked += 1
  File "./pytorch_to_caffe.py", line 459, in _iadd
    bottom=[log.blobs(input),log.blobs(args[0])], top=top_blobs)
  File "./pytorch_to_caffe.py", line 89, in blobs
    print("{}:{} getting".format(var, self._blobs[var]))
  File "./pytorch_to_caffe.py", line 32, in __getitem__
    return self.data[key]
KeyError: 139891738811560

Is some special way of writing the model required to make it work?

Cannot convert AvgPool2d

Whether I use mean or the avgpool2d function in the network structure, after conversion it becomes a 'Reduction' layer in caffe, but my later deployment does not support that layer. The PytorchToCaffe source seems to contain code for converting to caffe's Pooling layer; how should I use avgpool in pytorch so that this path is taken? My pytorch version is 1.2.0.

How to support converting F.relu6?

Hello, I have tried adding this code:

def _relu6(raw, input, inplace=False):
    # for F.relu6
    x = raw(input, inplace)
    bottom_blobs=[log.blobs(input)]
    name = log.add_layer(name='relu6')
    top_blobs=log.add_blobs([x],name=bottom_blobs[0],with_num=False)
    layer = caffe_net.Layer_param(name=name, type='ReLU6',
                                  bottom=bottom_blobs,top=top_blobs)
    log.cnet.add_layer(layer)
    return x

or

def _relu6(raw, input, inplace=False):
    # for threshold or prelu
    x = raw(input, False)
    name = log.add_layer(name='relu6')
    log.add_blobs([x], name='relu6_blob')
    layer = caffe_net.Layer_param(name=name, type='ReLU6',
                                  bottom=[log.blobs(input)], top=[log.blobs(x)])
    log.cnet.add_layer(layer)
    return x

and finally added this line below:

F.relu6=Rp(F.relu6,_relu6)

but it still raises an error:

139789595636720:add_blob1 was added to blobs
Add blob       add_blob1       : torch.Size([1, 16, 112, 112])
139789595637440:batch_norm_blob1 getting
Traceback (most recent call last):
  File "example/mobilenet_pytorch_to_caffe.py", line 20, in <module>
    pytorch_to_caffe.trans_net(net, input, name)
  File "/home/sonny/PytorchToCaffe/pytorch_to_caffe.py", line 658, in trans_net
    out = net.forward(input_var)
  File "mobilenet/mobilenet.py", line 182, in forward
    out = self.hs1(self.bn1(self.conv1(x)))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "mobilenet/mobilenet.py", line 15, in forward
    out = x * F.relu6(x + 3, inplace=True) / 6
  File "/home/sonny/PytorchToCaffe/pytorch_to_caffe.py", line 486, in _add
    bottom=[log.blobs(input),log.blobs(args[0])], top=top_blobs)
  File "/home/sonny/PytorchToCaffe/pytorch_to_caffe.py", line 88, in blobs
    print("{}:{} getting".format(var, self._blobs[var]))
  File "/home/sonny/PytorchToCaffe/pytorch_to_caffe.py", line 31, in __getitem__
    return self.data[key]
KeyError: 10914560

Do you know how to solve this? Any advice would be appreciated.

Error when loading Alexnet transformed in caffe

Hello,

In order to translate Alexnet from Pytorch to Caffe, I added the lines describe in #5 (comment).

The script prints Transform Completed and I get my prototxt and caffemodel files, but when I try to load them from Caffe I get the following error:

F0607 13:28:49.344043 5379 reshape_layer.cpp:87] Check failed: top[0]->count() == bottom[0]->count() (9216 vs. 1024) output count must match input count
*** Check failure stack trace: ***
Aborted (core dumped)

I can't find the solution to this problem... any idea?

Big thanks! :)

MaxPool2DWithIndicesBackward

Traceback (most recent call last):
File "main.py", line 322, in
pytorch2caffe(dummy_input, dummpy_output, 'enet-pytorch2caffe.prototxt', 'enet-pytorch2caffe.caffemodel')
File "C:\pytorch2caffe\pytorch2caffe.py", line 47, in pytorch2caffe
net_info = pytorch2prototxt(input_var, output_var)
File "C:\pytorch2caffe\pytorch2caffe.py", line 335, in pytorch2prototxt
add_layer(output_var.grad_fn)
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 163, in add_layer
top_name = add_layer(u[0])
File "C:\pytorch2caffe\pytorch2caffe.py", line 175, in add_layer
layer['type'] = layer_dict[parent_type]
KeyError: 'MaxPool2DWithIndicesBackward'

depthwise

In mobilenet there are depthwise operations; could you add support for them?

print doesn't work?

Possibly because of the traceback walking, adding a print causes an error?

For example:

    def __call__(self,*args,**kwargs):
        print (args) # adding this line causes an error later
        if not NET_INITTED:
            return self.raw(*args,**kwargs)
        for stack in traceback.walk_stack(None):
            if 'self' in stack[0].f_locals:
                layer=stack[0].f_locals['self']
                if layer in layer_names:
                    log.pytorch_layer_name=layer_names[layer]
                    print(layer_names[layer])
                    break
        out=self.obj(self.raw,*args,**kwargs)
        # if isinstance(out,Variable):
        #     out=[out]
        return out

If I want to trace the flow, what can I do?

Large accuracy gap after model conversion

Hello, using your tool I converted a pytorch model to caffe and verified it, and everything looked normal. But when I tested the model extensively under caffe, the caffe model's mAP was 10 points lower than the pytorch model's. My model is resnet-50. I have tried to track down the problem from several angles but still have no clue; I hope you can offer some advice, thanks!

GPU memory usage is large

When I translate mobilenet/mobilenetv2 to caffe and load the caffe model in python, the GPU memory usage is huge, even for mobilenetv2 with a width multiplier of 0.25.

pytorch2caffe error??

My env is pytorch 1.0.1.post2,
and I want to convert a pytorch model to caffe.
1. run python3 example/alexnet_pytorch_to_caffe.py
140558256725592:view_blob1 was added to blobs
Add blob view_blob1 : torch.Size([1, 9216])
Traceback (most recent call last):
File "example/alexnet_pytorch_to_caffe.py", line 12, in <module>
pytorch_to_caffe.trans_net(net,input,name)
File "./pytorch_to_caffe.py", line 612, in trans_net
out = net.forward(input_var)
File "/usr/local/lib/python3.5/dist-packages/torchvision/models/alexnet.py", line 46, in forward
x = x.view(x.size(0), 256 * 6 * 6)
File "./pytorch_to_caffe.py", line 410, in _view
bottom=[log.blobs(input)],top=top_blobs)
File "./pytorch_to_caffe.py", line 88, in blobs
print("{}:{} getting".format(var, self._blobs[var]))
File "./pytorch_to_caffe.py", line 31, in __getitem__
return self.data[key]
KeyError: 140558256725160
2. run python3 example/resnet_pytorch_2_caffe.py
Traceback (most recent call last):
File "example/resnet_pytorch_2_caffe.py", line 11, in <module>
checkpoint = torch.load("/home/luna/mmdnn/imagenet_resnet18.pth")
File "/usr/local/lib/python3.5/dist-packages/torch/serialization.py", line 368, in load
return _load(f, map_location, pickle_module)
File "/usr/local/lib/python3.5/dist-packages/torch/serialization.py", line 542, in _load
result = unpickler.load()
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1105: ordinal not in range(128)

Can a network with two inputs be converted?

Hello, can this project convert a two-stream style network? Its forward function is as follows:
`
def forward(self, x, target=None):
    N, C, T, V, M = x.size()  # N0, C1, T2, V3, M4
    motion = x[:,:,1::,:,:] - x[:,:,0:-1,:,:]
    motion = motion.permute(0,1,4,2,3).contiguous().view(N,C*M,T-1,V)
    motion = F.upsample(motion, size=(T,V), mode='bilinear', align_corners=False).contiguous().view(N,C,M,T,V).permute(0,1,3,4,2)

    logits = []
    for i in range(self.num_person):
        # position
        # N0,C1,T2,V3 point-level
        out = self.conv1(x[:,:,:,:,i])

        out = self.conv2(out)
        # N0,V1,T2,C3, global level
        out = out.permute(0,3,2,1).contiguous()
        out = self.conv3(out)
        out_p = self.conv4(out)


        # motion
        # N0,T1,V2,C3 point-level
        out = self.conv1m(motion[:,:,:,:,i])
        out = self.conv2m(out)
        # N0,V1,T2,C3, global level
        out = out.permute(0, 3, 2, 1).contiguous()
        out = self.conv3m(out)
        out_m = self.conv4m(out)

        # concat
        out = torch.cat((out_p,out_m),dim=1)
        out = self.conv5(out)
        out = self.conv6(out)

        logits.append(out)

    # max out logits

    out = torch.max(logits[0], logits[1])

    out = out.view(out.size(0), -1)
    out = self.fc7(out)
    out = self.fc8(out)

    t = out
    print(t)
    assert not ((t != t).any())# find out nan in tensor
    assert not (t.abs().sum() == 0) # find out 0 tensor

    return out

The second input is computed from the first one (the motion variable in the function). When I run the conversion, the error is:
Traceback (most recent call last):
File "hcn/convert.py", line 297, in
pytorch_to_caffe.trans_net(model,x,name)
File "./pytorch_to_caffe.py", line 625, in trans_net
out = net.forward(input_var)
File "hcn/convert.py", line 159, in forward
motion = x[:,:,1::,:,:]-x[:,:,0:-1,:,:]
File "./pytorch_to_caffe.py", line 476, in _sub
b1 = log.blobs(input)
File "./pytorch_to_caffe.py", line 91, in blobs
print(self._blobs[var])
File "./pytorch_to_caffe.py", line 32, in getitem
return self.data[key]
KeyError: 140445193528256
`
Please take a look and let me know whether this kind of model conversion is supported~ Many thanks.

AttributeError: 'module' object has no attribute 'walk_stack'

Hi, I got this error at pytorch_to_caffe.py, line 530, in __call__.

526 def __call__(self,*args,**kwargs):
527     if not NET_INITTED:
528         return self.raw(*args,**kwargs)
529     print(dir(traceback))
530     for stack in traceback.walk_stack(None):
531         if 'self' in stack[0].f_locals:
532             layer=stack[0].f_locals['self']
533             if layer in layer_names:
534                 log.pytorch_layer_name=layer_names[layer]
535                 print(layer_names[layer])
536                 break
And I printed the traceback module's attributes:
['__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '_format_final_exc_line', '_print', '_some_str', 'extract_stack', 'extract_tb', 'format_exc', 'format_exception', 'format_exception_only', 'format_list', 'format_stack', 'format_tb', 'linecache', 'print_exc', 'print_exception', 'print_last', 'print_list', 'print_stack', 'print_tb', 'sys', 'tb_lineno', 'types']
There is no walk_stack.
How should I overcome this error?

the diff of resnet18 between caffe and pytorch model is large

I transferred resnet18 from pytorch to caffe.
But the diff is too large.
With a random input, the diff is around 0.17.

How can I find out where the error is?

fc_blob1 pytorch_shape: (1000,) caffe_shape: (1000,) output_diff: 0.161286

I deleted the pad: 1 in max_pool1 because it showed the error: F0429 07:03:48.171044 146212 net.cpp:757] Cannot copy param 0 weights from layer 'fc1'; shape mismatch. Source param shape is 1000 512 (512000); target param shape is 1000 2048 (2048000). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.

What is NET_INITTED for?

Hello author. Why do I need to set NET_INITTED=False for the conversion to run normally, and if I set it to False, does it have any effect on my trained model?

has no field named "upsample_param" when ReadProtoFromTextFile

I have an interpolate layer in my pytorch model, and it was transformed into an upsample layer in caffe. But when I used the converted prototxt and caffemodel, something errored:

[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 1424:18: Message type "caffe.LayerParameter" has no field named "upsample_param".

F0510 09:12:19.089694 25854 upgrade_proto.cpp:90] Check failed: ReadProtoFromTextFile(param_file,param) Failed to parse NetParameter file: enhance_model.prototxt

It seems like there is no upsample layer in caffe? Should I compile a caffe with an upsample layer myself? (I found that some people add an interp layer to caffe; could that help solve the problem?)
Thanks~

Convert model accuracy drop about 8%

Hi, I used your project to convert inceptionV3 from pytorch to caffe. The original pytorch model's Top-1 accuracy is about 77%; after converting, I tested the caffemodel and prototxt on the Imagenet val dataset and got a Top-1 accuracy of 69%.
Can you help me?
The prototxt is as below:
`name: "inception_v3"

layer {
name: "data"
#type: "Data"
type: "ImageData"
top: "blob1"
top: "label"
transform_param {
scale: 0.0078125
mirror: false
crop_size: 299
mean_value: 128.0
mean_value: 128.0
mean_value: 128.0
}
image_data_param {
source: "~/imagenet/val.txt"
new_height: 324
new_width: 324
batch_size: 20
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "blob1"
top: "conv_blob1"
convolution_param {
num_output: 32
bias_term: false
pad: 0
kernel_size: 3
group: 1
stride: 2
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm1"
type: "BatchNorm"
bottom: "conv_blob1"
top: "batch_norm_blob1"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale1"
type: "Scale"
bottom: "batch_norm_blob1"
top: "batch_norm_blob1"
scale_param {
bias_term: true
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "batch_norm_blob1"
top: "relu_blob1"
}
layer {
name: "conv2"
type: "Convolution"
bottom: "relu_blob1"
top: "conv_blob2"
convolution_param {
num_output: 32
bias_term: false
pad: 0
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm2"
type: "BatchNorm"
bottom: "conv_blob2"
top: "batch_norm_blob2"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale2"
type: "Scale"
bottom: "batch_norm_blob2"
top: "batch_norm_blob2"
scale_param {
bias_term: true
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "batch_norm_blob2"
top: "relu_blob2"
}
layer {
name: "conv3"
type: "Convolution"
bottom: "relu_blob2"
top: "conv_blob3"
convolution_param {
num_output: 64
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm3"
type: "BatchNorm"
bottom: "conv_blob3"
top: "batch_norm_blob3"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale3"
type: "Scale"
bottom: "batch_norm_blob3"
top: "batch_norm_blob3"
scale_param {
bias_term: true
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "batch_norm_blob3"
top: "relu_blob3"
}
layer {
name: "max_pool1"
type: "Pooling"
bottom: "relu_blob3"
top: "max_pool_blob1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "conv4"
type: "Convolution"
bottom: "max_pool_blob1"
top: "conv_blob4"
convolution_param {
num_output: 80
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm4"
type: "BatchNorm"
bottom: "conv_blob4"
top: "batch_norm_blob4"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale4"
type: "Scale"
bottom: "batch_norm_blob4"
top: "batch_norm_blob4"
scale_param {
bias_term: true
}
}
layer {
name: "relu4"
type: "ReLU"
bottom: "batch_norm_blob4"
top: "relu_blob4"
}
layer {
name: "conv5"
type: "Convolution"
bottom: "relu_blob4"
top: "conv_blob5"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm5"
type: "BatchNorm"
bottom: "conv_blob5"
top: "batch_norm_blob5"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale5"
type: "Scale"
bottom: "batch_norm_blob5"
top: "batch_norm_blob5"
scale_param {
bias_term: true
}
}
layer {
name: "relu5"
type: "ReLU"
bottom: "batch_norm_blob5"
top: "relu_blob5"
}
layer {
name: "max_pool2"
type: "Pooling"
bottom: "relu_blob5"
top: "max_pool_blob2"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "conv6"
type: "Convolution"
bottom: "max_pool_blob2"
top: "conv_blob6"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm6"
type: "BatchNorm"
bottom: "conv_blob6"
top: "batch_norm_blob6"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale6"
type: "Scale"
bottom: "batch_norm_blob6"
top: "batch_norm_blob6"
scale_param {
bias_term: true
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "batch_norm_blob6"
top: "relu_blob6"
}
layer {
name: "conv7"
type: "Convolution"
bottom: "max_pool_blob2"
top: "conv_blob7"
convolution_param {
num_output: 48
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm7"
type: "BatchNorm"
bottom: "conv_blob7"
top: "batch_norm_blob7"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale7"
type: "Scale"
bottom: "batch_norm_blob7"
top: "batch_norm_blob7"
scale_param {
bias_term: true
}
}
layer {
name: "relu7"
type: "ReLU"
bottom: "batch_norm_blob7"
top: "relu_blob7"
}
layer {
name: "conv8"
type: "Convolution"
bottom: "relu_blob7"
top: "conv_blob8"
convolution_param {
num_output: 64
bias_term: false
pad: 2
kernel_size: 5
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm8"
type: "BatchNorm"
bottom: "conv_blob8"
top: "batch_norm_blob8"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale8"
type: "Scale"
bottom: "batch_norm_blob8"
top: "batch_norm_blob8"
scale_param {
bias_term: true
}
}
layer {
name: "relu8"
type: "ReLU"
bottom: "batch_norm_blob8"
top: "relu_blob8"
}
layer {
name: "conv9"
type: "Convolution"
bottom: "max_pool_blob2"
top: "conv_blob9"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm9"
type: "BatchNorm"
bottom: "conv_blob9"
top: "batch_norm_blob9"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale9"
type: "Scale"
bottom: "batch_norm_blob9"
top: "batch_norm_blob9"
scale_param {
bias_term: true
}
}
layer {
name: "relu9"
type: "ReLU"
bottom: "batch_norm_blob9"
top: "relu_blob9"
}
layer {
name: "conv10"
type: "Convolution"
bottom: "relu_blob9"
top: "conv_blob10"
convolution_param {
num_output: 96
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm10"
type: "BatchNorm"
bottom: "conv_blob10"
top: "batch_norm_blob10"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale10"
type: "Scale"
bottom: "batch_norm_blob10"
top: "batch_norm_blob10"
scale_param {
bias_term: true
}
}
layer {
name: "relu10"
type: "ReLU"
bottom: "batch_norm_blob10"
top: "relu_blob10"
}
layer {
name: "conv11"
type: "Convolution"
bottom: "relu_blob10"
top: "conv_blob11"
convolution_param {
num_output: 96
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm11"
type: "BatchNorm"
bottom: "conv_blob11"
top: "batch_norm_blob11"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale11"
type: "Scale"
bottom: "batch_norm_blob11"
top: "batch_norm_blob11"
scale_param {
bias_term: true
}
}
layer {
name: "relu11"
type: "ReLU"
bottom: "batch_norm_blob11"
top: "relu_blob11"
}
layer {
name: "ave_pool1"
type: "Pooling"
bottom: "max_pool_blob2"
top: "ave_pool_blob1"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv12"
type: "Convolution"
bottom: "ave_pool_blob1"
top: "conv_blob12"
convolution_param {
num_output: 32
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm12"
type: "BatchNorm"
bottom: "conv_blob12"
top: "batch_norm_blob12"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale12"
type: "Scale"
bottom: "batch_norm_blob12"
top: "batch_norm_blob12"
scale_param {
bias_term: true
}
}
layer {
name: "relu12"
type: "ReLU"
bottom: "batch_norm_blob12"
top: "relu_blob12"
}
layer {
name: "cat1"
type: "Concat"
bottom: "relu_blob6"
bottom: "relu_blob8"
bottom: "relu_blob11"
bottom: "relu_blob12"
top: "cat_blob1"
concat_param {
axis: 1
}
}
layer {
name: "conv13"
type: "Convolution"
bottom: "cat_blob1"
top: "conv_blob13"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm13"
type: "BatchNorm"
bottom: "conv_blob13"
top: "batch_norm_blob13"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale13"
type: "Scale"
bottom: "batch_norm_blob13"
top: "batch_norm_blob13"
scale_param {
bias_term: true
}
}
layer {
name: "relu13"
type: "ReLU"
bottom: "batch_norm_blob13"
top: "relu_blob13"
}
layer {
name: "conv14"
type: "Convolution"
bottom: "cat_blob1"
top: "conv_blob14"
convolution_param {
num_output: 48
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm14"
type: "BatchNorm"
bottom: "conv_blob14"
top: "batch_norm_blob14"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale14"
type: "Scale"
bottom: "batch_norm_blob14"
top: "batch_norm_blob14"
scale_param {
bias_term: true
}
}
layer {
name: "relu14"
type: "ReLU"
bottom: "batch_norm_blob14"
top: "relu_blob14"
}
layer {
name: "conv15"
type: "Convolution"
bottom: "relu_blob14"
top: "conv_blob15"
convolution_param {
num_output: 64
bias_term: false
pad: 2
kernel_size: 5
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm15"
type: "BatchNorm"
bottom: "conv_blob15"
top: "batch_norm_blob15"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale15"
type: "Scale"
bottom: "batch_norm_blob15"
top: "batch_norm_blob15"
scale_param {
bias_term: true
}
}
layer {
name: "relu15"
type: "ReLU"
bottom: "batch_norm_blob15"
top: "relu_blob15"
}
layer {
name: "conv16"
type: "Convolution"
bottom: "cat_blob1"
top: "conv_blob16"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm16"
type: "BatchNorm"
bottom: "conv_blob16"
top: "batch_norm_blob16"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale16"
type: "Scale"
bottom: "batch_norm_blob16"
top: "batch_norm_blob16"
scale_param {
bias_term: true
}
}
layer {
name: "relu16"
type: "ReLU"
bottom: "batch_norm_blob16"
top: "relu_blob16"
}
layer {
name: "conv17"
type: "Convolution"
bottom: "relu_blob16"
top: "conv_blob17"
convolution_param {
num_output: 96
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm17"
type: "BatchNorm"
bottom: "conv_blob17"
top: "batch_norm_blob17"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale17"
type: "Scale"
bottom: "batch_norm_blob17"
top: "batch_norm_blob17"
scale_param {
bias_term: true
}
}
layer {
name: "relu17"
type: "ReLU"
bottom: "batch_norm_blob17"
top: "relu_blob17"
}
layer {
name: "conv18"
type: "Convolution"
bottom: "relu_blob17"
top: "conv_blob18"
convolution_param {
num_output: 96
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm18"
type: "BatchNorm"
bottom: "conv_blob18"
top: "batch_norm_blob18"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale18"
type: "Scale"
bottom: "batch_norm_blob18"
top: "batch_norm_blob18"
scale_param {
bias_term: true
}
}
layer {
name: "relu18"
type: "ReLU"
bottom: "batch_norm_blob18"
top: "relu_blob18"
}
layer {
name: "ave_pool2"
type: "Pooling"
bottom: "cat_blob1"
top: "ave_pool_blob2"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv19"
type: "Convolution"
bottom: "ave_pool_blob2"
top: "conv_blob19"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm19"
type: "BatchNorm"
bottom: "conv_blob19"
top: "batch_norm_blob19"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale19"
type: "Scale"
bottom: "batch_norm_blob19"
top: "batch_norm_blob19"
scale_param {
bias_term: true
}
}
layer {
name: "relu19"
type: "ReLU"
bottom: "batch_norm_blob19"
top: "relu_blob19"
}
layer {
name: "cat2"
type: "Concat"
bottom: "relu_blob13"
bottom: "relu_blob15"
bottom: "relu_blob18"
bottom: "relu_blob19"
top: "cat_blob2"
concat_param {
axis: 1
}
}
layer {
name: "conv20"
type: "Convolution"
bottom: "cat_blob2"
top: "conv_blob20"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm20"
type: "BatchNorm"
bottom: "conv_blob20"
top: "batch_norm_blob20"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale20"
type: "Scale"
bottom: "batch_norm_blob20"
top: "batch_norm_blob20"
scale_param {
bias_term: true
}
}
layer {
name: "relu20"
type: "ReLU"
bottom: "batch_norm_blob20"
top: "relu_blob20"
}
layer {
name: "conv21"
type: "Convolution"
bottom: "cat_blob2"
top: "conv_blob21"
convolution_param {
num_output: 48
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm21"
type: "BatchNorm"
bottom: "conv_blob21"
top: "batch_norm_blob21"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale21"
type: "Scale"
bottom: "batch_norm_blob21"
top: "batch_norm_blob21"
scale_param {
bias_term: true
}
}
layer {
name: "relu21"
type: "ReLU"
bottom: "batch_norm_blob21"
top: "relu_blob21"
}
layer {
name: "conv22"
type: "Convolution"
bottom: "relu_blob21"
top: "conv_blob22"
convolution_param {
num_output: 64
bias_term: false
pad: 2
kernel_size: 5
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm22"
type: "BatchNorm"
bottom: "conv_blob22"
top: "batch_norm_blob22"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale22"
type: "Scale"
bottom: "batch_norm_blob22"
top: "batch_norm_blob22"
scale_param {
bias_term: true
}
}
layer {
name: "relu22"
type: "ReLU"
bottom: "batch_norm_blob22"
top: "relu_blob22"
}
layer {
name: "conv23"
type: "Convolution"
bottom: "cat_blob2"
top: "conv_blob23"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm23"
type: "BatchNorm"
bottom: "conv_blob23"
top: "batch_norm_blob23"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale23"
type: "Scale"
bottom: "batch_norm_blob23"
top: "batch_norm_blob23"
scale_param {
bias_term: true
}
}
layer {
name: "relu23"
type: "ReLU"
bottom: "batch_norm_blob23"
top: "relu_blob23"
}
layer {
name: "conv24"
type: "Convolution"
bottom: "relu_blob23"
top: "conv_blob24"
convolution_param {
num_output: 96
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm24"
type: "BatchNorm"
bottom: "conv_blob24"
top: "batch_norm_blob24"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale24"
type: "Scale"
bottom: "batch_norm_blob24"
top: "batch_norm_blob24"
scale_param {
bias_term: true
}
}
layer {
name: "relu24"
type: "ReLU"
bottom: "batch_norm_blob24"
top: "relu_blob24"
}
layer {
name: "conv25"
type: "Convolution"
bottom: "relu_blob24"
top: "conv_blob25"
convolution_param {
num_output: 96
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm25"
type: "BatchNorm"
bottom: "conv_blob25"
top: "batch_norm_blob25"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale25"
type: "Scale"
bottom: "batch_norm_blob25"
top: "batch_norm_blob25"
scale_param {
bias_term: true
}
}
layer {
name: "relu25"
type: "ReLU"
bottom: "batch_norm_blob25"
top: "relu_blob25"
}
layer {
name: "ave_pool3"
type: "Pooling"
bottom: "cat_blob2"
top: "ave_pool_blob3"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv26"
type: "Convolution"
bottom: "ave_pool_blob3"
top: "conv_blob26"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm26"
type: "BatchNorm"
bottom: "conv_blob26"
top: "batch_norm_blob26"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale26"
type: "Scale"
bottom: "batch_norm_blob26"
top: "batch_norm_blob26"
scale_param {
bias_term: true
}
}
layer {
name: "relu26"
type: "ReLU"
bottom: "batch_norm_blob26"
top: "relu_blob26"
}
layer {
name: "cat3"
type: "Concat"
bottom: "relu_blob20"
bottom: "relu_blob22"
bottom: "relu_blob25"
bottom: "relu_blob26"
top: "cat_blob3"
concat_param {
axis: 1
}
}
layer {
name: "conv27"
type: "Convolution"
bottom: "cat_blob3"
top: "conv_blob27"
convolution_param {
num_output: 384
bias_term: false
pad: 0
kernel_size: 3
group: 1
stride: 2
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm27"
type: "BatchNorm"
bottom: "conv_blob27"
top: "batch_norm_blob27"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale27"
type: "Scale"
bottom: "batch_norm_blob27"
top: "batch_norm_blob27"
scale_param {
bias_term: true
}
}
layer {
name: "relu27"
type: "ReLU"
bottom: "batch_norm_blob27"
top: "relu_blob27"
}
layer {
name: "conv28"
type: "Convolution"
bottom: "cat_blob3"
top: "conv_blob28"
convolution_param {
num_output: 64
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm28"
type: "BatchNorm"
bottom: "conv_blob28"
top: "batch_norm_blob28"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale28"
type: "Scale"
bottom: "batch_norm_blob28"
top: "batch_norm_blob28"
scale_param {
bias_term: true
}
}
layer {
name: "relu28"
type: "ReLU"
bottom: "batch_norm_blob28"
top: "relu_blob28"
}
layer {
name: "conv29"
type: "Convolution"
bottom: "relu_blob28"
top: "conv_blob29"
convolution_param {
num_output: 96
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm29"
type: "BatchNorm"
bottom: "conv_blob29"
top: "batch_norm_blob29"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale29"
type: "Scale"
bottom: "batch_norm_blob29"
top: "batch_norm_blob29"
scale_param {
bias_term: true
}
}
layer {
name: "relu29"
type: "ReLU"
bottom: "batch_norm_blob29"
top: "relu_blob29"
}
layer {
name: "conv30"
type: "Convolution"
bottom: "relu_blob29"
top: "conv_blob30"
convolution_param {
num_output: 96
bias_term: false
pad: 0
kernel_size: 3
group: 1
stride: 2
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm30"
type: "BatchNorm"
bottom: "conv_blob30"
top: "batch_norm_blob30"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale30"
type: "Scale"
bottom: "batch_norm_blob30"
top: "batch_norm_blob30"
scale_param {
bias_term: true
}
}
layer {
name: "relu30"
type: "ReLU"
bottom: "batch_norm_blob30"
top: "relu_blob30"
}
layer {
name: "max_pool3"
type: "Pooling"
bottom: "cat_blob3"
top: "max_pool_blob3"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "cat4"
type: "Concat"
bottom: "relu_blob27"
bottom: "relu_blob30"
bottom: "max_pool_blob3"
top: "cat_blob4"
concat_param {
axis: 1
}
}
layer {
name: "conv31"
type: "Convolution"
bottom: "cat_blob4"
top: "conv_blob31"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm31"
type: "BatchNorm"
bottom: "conv_blob31"
top: "batch_norm_blob31"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale31"
type: "Scale"
bottom: "batch_norm_blob31"
top: "batch_norm_blob31"
scale_param {
bias_term: true
}
}
layer {
name: "relu31"
type: "ReLU"
bottom: "batch_norm_blob31"
top: "relu_blob31"
}
layer {
name: "conv32"
type: "Convolution"
bottom: "cat_blob4"
top: "conv_blob32"
convolution_param {
num_output: 128
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm32"
type: "BatchNorm"
bottom: "conv_blob32"
top: "batch_norm_blob32"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale32"
type: "Scale"
bottom: "batch_norm_blob32"
top: "batch_norm_blob32"
scale_param {
bias_term: true
}
}
layer {
name: "relu32"
type: "ReLU"
bottom: "batch_norm_blob32"
top: "relu_blob32"
}
layer {
name: "conv33"
type: "Convolution"
bottom: "relu_blob32"
top: "conv_blob33"
convolution_param {
num_output: 128
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm33"
type: "BatchNorm"
bottom: "conv_blob33"
top: "batch_norm_blob33"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale33"
type: "Scale"
bottom: "batch_norm_blob33"
top: "batch_norm_blob33"
scale_param {
bias_term: true
}
}
layer {
name: "relu33"
type: "ReLU"
bottom: "batch_norm_blob33"
top: "relu_blob33"
}
layer {
name: "conv34"
type: "Convolution"
bottom: "relu_blob33"
top: "conv_blob34"
convolution_param {
num_output: 192
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm34"
type: "BatchNorm"
bottom: "conv_blob34"
top: "batch_norm_blob34"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale34"
type: "Scale"
bottom: "batch_norm_blob34"
top: "batch_norm_blob34"
scale_param {
bias_term: true
}
}
layer {
name: "relu34"
type: "ReLU"
bottom: "batch_norm_blob34"
top: "relu_blob34"
}
layer {
name: "conv35"
type: "Convolution"
bottom: "cat_blob4"
top: "conv_blob35"
convolution_param {
num_output: 128
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm35"
type: "BatchNorm"
bottom: "conv_blob35"
top: "batch_norm_blob35"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale35"
type: "Scale"
bottom: "batch_norm_blob35"
top: "batch_norm_blob35"
scale_param {
bias_term: true
}
}
layer {
name: "relu35"
type: "ReLU"
bottom: "batch_norm_blob35"
top: "relu_blob35"
}
layer {
name: "conv36"
type: "Convolution"
bottom: "relu_blob35"
top: "conv_blob36"
convolution_param {
num_output: 128
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm36"
type: "BatchNorm"
bottom: "conv_blob36"
top: "batch_norm_blob36"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale36"
type: "Scale"
bottom: "batch_norm_blob36"
top: "batch_norm_blob36"
scale_param {
bias_term: true
}
}
layer {
name: "relu36"
type: "ReLU"
bottom: "batch_norm_blob36"
top: "relu_blob36"
}
layer {
name: "conv37"
type: "Convolution"
bottom: "relu_blob36"
top: "conv_blob37"
convolution_param {
num_output: 128
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm37"
type: "BatchNorm"
bottom: "conv_blob37"
top: "batch_norm_blob37"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale37"
type: "Scale"
bottom: "batch_norm_blob37"
top: "batch_norm_blob37"
scale_param {
bias_term: true
}
}
layer {
name: "relu37"
type: "ReLU"
bottom: "batch_norm_blob37"
top: "relu_blob37"
}
layer {
name: "conv38"
type: "Convolution"
bottom: "relu_blob37"
top: "conv_blob38"
convolution_param {
num_output: 128
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm38"
type: "BatchNorm"
bottom: "conv_blob38"
top: "batch_norm_blob38"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale38"
type: "Scale"
bottom: "batch_norm_blob38"
top: "batch_norm_blob38"
scale_param {
bias_term: true
}
}
layer {
name: "relu38"
type: "ReLU"
bottom: "batch_norm_blob38"
top: "relu_blob38"
}
layer {
name: "conv39"
type: "Convolution"
bottom: "relu_blob38"
top: "conv_blob39"
convolution_param {
num_output: 192
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm39"
type: "BatchNorm"
bottom: "conv_blob39"
top: "batch_norm_blob39"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale39"
type: "Scale"
bottom: "batch_norm_blob39"
top: "batch_norm_blob39"
scale_param {
bias_term: true
}
}
layer {
name: "relu39"
type: "ReLU"
bottom: "batch_norm_blob39"
top: "relu_blob39"
}
layer {
name: "ave_pool4"
type: "Pooling"
bottom: "cat_blob4"
top: "ave_pool_blob4"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv40"
type: "Convolution"
bottom: "ave_pool_blob4"
top: "conv_blob40"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm40"
type: "BatchNorm"
bottom: "conv_blob40"
top: "batch_norm_blob40"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale40"
type: "Scale"
bottom: "batch_norm_blob40"
top: "batch_norm_blob40"
scale_param {
bias_term: true
}
}
layer {
name: "relu40"
type: "ReLU"
bottom: "batch_norm_blob40"
top: "relu_blob40"
}
layer {
name: "cat5"
type: "Concat"
bottom: "relu_blob31"
bottom: "relu_blob34"
bottom: "relu_blob39"
bottom: "relu_blob40"
top: "cat_blob5"
concat_param {
axis: 1
}
}
layer {
name: "conv41"
type: "Convolution"
bottom: "cat_blob5"
top: "conv_blob41"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm41"
type: "BatchNorm"
bottom: "conv_blob41"
top: "batch_norm_blob41"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale41"
type: "Scale"
bottom: "batch_norm_blob41"
top: "batch_norm_blob41"
scale_param {
bias_term: true
}
}
layer {
name: "relu41"
type: "ReLU"
bottom: "batch_norm_blob41"
top: "relu_blob41"
}
layer {
name: "conv42"
type: "Convolution"
bottom: "cat_blob5"
top: "conv_blob42"
convolution_param {
num_output: 160
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm42"
type: "BatchNorm"
bottom: "conv_blob42"
top: "batch_norm_blob42"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale42"
type: "Scale"
bottom: "batch_norm_blob42"
top: "batch_norm_blob42"
scale_param {
bias_term: true
}
}
layer {
name: "relu42"
type: "ReLU"
bottom: "batch_norm_blob42"
top: "relu_blob42"
}
layer {
name: "conv43"
type: "Convolution"
bottom: "relu_blob42"
top: "conv_blob43"
convolution_param {
num_output: 160
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm43"
type: "BatchNorm"
bottom: "conv_blob43"
top: "batch_norm_blob43"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale43"
type: "Scale"
bottom: "batch_norm_blob43"
top: "batch_norm_blob43"
scale_param {
bias_term: true
}
}
layer {
name: "relu43"
type: "ReLU"
bottom: "batch_norm_blob43"
top: "relu_blob43"
}
layer {
name: "conv44"
type: "Convolution"
bottom: "relu_blob43"
top: "conv_blob44"
convolution_param {
num_output: 192
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm44"
type: "BatchNorm"
bottom: "conv_blob44"
top: "batch_norm_blob44"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale44"
type: "Scale"
bottom: "batch_norm_blob44"
top: "batch_norm_blob44"
scale_param {
bias_term: true
}
}
layer {
name: "relu44"
type: "ReLU"
bottom: "batch_norm_blob44"
top: "relu_blob44"
}
layer {
name: "conv45"
type: "Convolution"
bottom: "cat_blob5"
top: "conv_blob45"
convolution_param {
num_output: 160
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm45"
type: "BatchNorm"
bottom: "conv_blob45"
top: "batch_norm_blob45"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale45"
type: "Scale"
bottom: "batch_norm_blob45"
top: "batch_norm_blob45"
scale_param {
bias_term: true
}
}
layer {
name: "relu45"
type: "ReLU"
bottom: "batch_norm_blob45"
top: "relu_blob45"
}
layer {
name: "conv46"
type: "Convolution"
bottom: "relu_blob45"
top: "conv_blob46"
convolution_param {
num_output: 160
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm46"
type: "BatchNorm"
bottom: "conv_blob46"
top: "batch_norm_blob46"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale46"
type: "Scale"
bottom: "batch_norm_blob46"
top: "batch_norm_blob46"
scale_param {
bias_term: true
}
}
layer {
name: "relu46"
type: "ReLU"
bottom: "batch_norm_blob46"
top: "relu_blob46"
}
layer {
name: "conv47"
type: "Convolution"
bottom: "relu_blob46"
top: "conv_blob47"
convolution_param {
num_output: 160
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm47"
type: "BatchNorm"
bottom: "conv_blob47"
top: "batch_norm_blob47"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale47"
type: "Scale"
bottom: "batch_norm_blob47"
top: "batch_norm_blob47"
scale_param {
bias_term: true
}
}
layer {
name: "relu47"
type: "ReLU"
bottom: "batch_norm_blob47"
top: "relu_blob47"
}
layer {
name: "conv48"
type: "Convolution"
bottom: "relu_blob47"
top: "conv_blob48"
convolution_param {
num_output: 160
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm48"
type: "BatchNorm"
bottom: "conv_blob48"
top: "batch_norm_blob48"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale48"
type: "Scale"
bottom: "batch_norm_blob48"
top: "batch_norm_blob48"
scale_param {
bias_term: true
}
}
layer {
name: "relu48"
type: "ReLU"
bottom: "batch_norm_blob48"
top: "relu_blob48"
}
layer {
name: "conv49"
type: "Convolution"
bottom: "relu_blob48"
top: "conv_blob49"
convolution_param {
num_output: 192
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm49"
type: "BatchNorm"
bottom: "conv_blob49"
top: "batch_norm_blob49"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale49"
type: "Scale"
bottom: "batch_norm_blob49"
top: "batch_norm_blob49"
scale_param {
bias_term: true
}
}
layer {
name: "relu49"
type: "ReLU"
bottom: "batch_norm_blob49"
top: "relu_blob49"
}
layer {
name: "ave_pool5"
type: "Pooling"
bottom: "cat_blob5"
top: "ave_pool_blob5"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv50"
type: "Convolution"
bottom: "ave_pool_blob5"
top: "conv_blob50"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm50"
type: "BatchNorm"
bottom: "conv_blob50"
top: "batch_norm_blob50"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale50"
type: "Scale"
bottom: "batch_norm_blob50"
top: "batch_norm_blob50"
scale_param {
bias_term: true
}
}
layer {
name: "relu50"
type: "ReLU"
bottom: "batch_norm_blob50"
top: "relu_blob50"
}
layer {
name: "cat6"
type: "Concat"
bottom: "relu_blob41"
bottom: "relu_blob44"
bottom: "relu_blob49"
bottom: "relu_blob50"
top: "cat_blob6"
concat_param {
axis: 1
}
}
layer {
name: "conv51"
type: "Convolution"
bottom: "cat_blob6"
top: "conv_blob51"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm51"
type: "BatchNorm"
bottom: "conv_blob51"
top: "batch_norm_blob51"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale51"
type: "Scale"
bottom: "batch_norm_blob51"
top: "batch_norm_blob51"
scale_param {
bias_term: true
}
}
layer {
name: "relu51"
type: "ReLU"
bottom: "batch_norm_blob51"
top: "relu_blob51"
}
layer {
name: "conv52"
type: "Convolution"
bottom: "cat_blob6"
top: "conv_blob52"
convolution_param {
num_output: 160
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm52"
type: "BatchNorm"
bottom: "conv_blob52"
top: "batch_norm_blob52"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale52"
type: "Scale"
bottom: "batch_norm_blob52"
top: "batch_norm_blob52"
scale_param {
bias_term: true
}
}
layer {
name: "relu52"
type: "ReLU"
bottom: "batch_norm_blob52"
top: "relu_blob52"
}
layer {
name: "conv53"
type: "Convolution"
bottom: "relu_blob52"
top: "conv_blob53"
convolution_param {
num_output: 160
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm53"
type: "BatchNorm"
bottom: "conv_blob53"
top: "batch_norm_blob53"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale53"
type: "Scale"
bottom: "batch_norm_blob53"
top: "batch_norm_blob53"
scale_param {
bias_term: true
}
}
layer {
name: "relu53"
type: "ReLU"
bottom: "batch_norm_blob53"
top: "relu_blob53"
}
layer {
name: "conv54"
type: "Convolution"
bottom: "relu_blob53"
top: "conv_blob54"
convolution_param {
num_output: 192
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm54"
type: "BatchNorm"
bottom: "conv_blob54"
top: "batch_norm_blob54"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale54"
type: "Scale"
bottom: "batch_norm_blob54"
top: "batch_norm_blob54"
scale_param {
bias_term: true
}
}
layer {
name: "relu54"
type: "ReLU"
bottom: "batch_norm_blob54"
top: "relu_blob54"
}
layer {
name: "conv55"
type: "Convolution"
bottom: "cat_blob6"
top: "conv_blob55"
convolution_param {
num_output: 160
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm55"
type: "BatchNorm"
bottom: "conv_blob55"
top: "batch_norm_blob55"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale55"
type: "Scale"
bottom: "batch_norm_blob55"
top: "batch_norm_blob55"
scale_param {
bias_term: true
}
}
layer {
name: "relu55"
type: "ReLU"
bottom: "batch_norm_blob55"
top: "relu_blob55"
}
layer {
name: "conv56"
type: "Convolution"
bottom: "relu_blob55"
top: "conv_blob56"
convolution_param {
num_output: 160
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm56"
type: "BatchNorm"
bottom: "conv_blob56"
top: "batch_norm_blob56"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale56"
type: "Scale"
bottom: "batch_norm_blob56"
top: "batch_norm_blob56"
scale_param {
bias_term: true
}
}
layer {
name: "relu56"
type: "ReLU"
bottom: "batch_norm_blob56"
top: "relu_blob56"
}
layer {
name: "conv57"
type: "Convolution"
bottom: "relu_blob56"
top: "conv_blob57"
convolution_param {
num_output: 160
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm57"
type: "BatchNorm"
bottom: "conv_blob57"
top: "batch_norm_blob57"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale57"
type: "Scale"
bottom: "batch_norm_blob57"
top: "batch_norm_blob57"
scale_param {
bias_term: true
}
}
layer {
name: "relu57"
type: "ReLU"
bottom: "batch_norm_blob57"
top: "relu_blob57"
}
layer {
name: "conv58"
type: "Convolution"
bottom: "relu_blob57"
top: "conv_blob58"
convolution_param {
num_output: 160
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm58"
type: "BatchNorm"
bottom: "conv_blob58"
top: "batch_norm_blob58"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale58"
type: "Scale"
bottom: "batch_norm_blob58"
top: "batch_norm_blob58"
scale_param {
bias_term: true
}
}
layer {
name: "relu58"
type: "ReLU"
bottom: "batch_norm_blob58"
top: "relu_blob58"
}
layer {
name: "conv59"
type: "Convolution"
bottom: "relu_blob58"
top: "conv_blob59"
convolution_param {
num_output: 192
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm59"
type: "BatchNorm"
bottom: "conv_blob59"
top: "batch_norm_blob59"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale59"
type: "Scale"
bottom: "batch_norm_blob59"
top: "batch_norm_blob59"
scale_param {
bias_term: true
}
}
layer {
name: "relu59"
type: "ReLU"
bottom: "batch_norm_blob59"
top: "relu_blob59"
}
layer {
name: "ave_pool6"
type: "Pooling"
bottom: "cat_blob6"
top: "ave_pool_blob6"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv60"
type: "Convolution"
bottom: "ave_pool_blob6"
top: "conv_blob60"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm60"
type: "BatchNorm"
bottom: "conv_blob60"
top: "batch_norm_blob60"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale60"
type: "Scale"
bottom: "batch_norm_blob60"
top: "batch_norm_blob60"
scale_param {
bias_term: true
}
}
layer {
name: "relu60"
type: "ReLU"
bottom: "batch_norm_blob60"
top: "relu_blob60"
}
layer {
name: "cat7"
type: "Concat"
bottom: "relu_blob51"
bottom: "relu_blob54"
bottom: "relu_blob59"
bottom: "relu_blob60"
top: "cat_blob7"
concat_param {
axis: 1
}
}
layer {
name: "conv61"
type: "Convolution"
bottom: "cat_blob7"
top: "conv_blob61"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm61"
type: "BatchNorm"
bottom: "conv_blob61"
top: "batch_norm_blob61"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale61"
type: "Scale"
bottom: "batch_norm_blob61"
top: "batch_norm_blob61"
scale_param {
bias_term: true
}
}
layer {
name: "relu61"
type: "ReLU"
bottom: "batch_norm_blob61"
top: "relu_blob61"
}
layer {
name: "conv62"
type: "Convolution"
bottom: "cat_blob7"
top: "conv_blob62"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm62"
type: "BatchNorm"
bottom: "conv_blob62"
top: "batch_norm_blob62"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale62"
type: "Scale"
bottom: "batch_norm_blob62"
top: "batch_norm_blob62"
scale_param {
bias_term: true
}
}
layer {
name: "relu62"
type: "ReLU"
bottom: "batch_norm_blob62"
top: "relu_blob62"
}
layer {
name: "conv63"
type: "Convolution"
bottom: "relu_blob62"
top: "conv_blob63"
convolution_param {
num_output: 192
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm63"
type: "BatchNorm"
bottom: "conv_blob63"
top: "batch_norm_blob63"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale63"
type: "Scale"
bottom: "batch_norm_blob63"
top: "batch_norm_blob63"
scale_param {
bias_term: true
}
}
layer {
name: "relu63"
type: "ReLU"
bottom: "batch_norm_blob63"
top: "relu_blob63"
}
layer {
name: "conv64"
type: "Convolution"
bottom: "relu_blob63"
top: "conv_blob64"
convolution_param {
num_output: 192
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm64"
type: "BatchNorm"
bottom: "conv_blob64"
top: "batch_norm_blob64"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale64"
type: "Scale"
bottom: "batch_norm_blob64"
top: "batch_norm_blob64"
scale_param {
bias_term: true
}
}
layer {
name: "relu64"
type: "ReLU"
bottom: "batch_norm_blob64"
top: "relu_blob64"
}
layer {
name: "conv65"
type: "Convolution"
bottom: "cat_blob7"
top: "conv_blob65"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm65"
type: "BatchNorm"
bottom: "conv_blob65"
top: "batch_norm_blob65"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale65"
type: "Scale"
bottom: "batch_norm_blob65"
top: "batch_norm_blob65"
scale_param {
bias_term: true
}
}
layer {
name: "relu65"
type: "ReLU"
bottom: "batch_norm_blob65"
top: "relu_blob65"
}
layer {
name: "conv66"
type: "Convolution"
bottom: "relu_blob65"
top: "conv_blob66"
convolution_param {
num_output: 192
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm66"
type: "BatchNorm"
bottom: "conv_blob66"
top: "batch_norm_blob66"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale66"
type: "Scale"
bottom: "batch_norm_blob66"
top: "batch_norm_blob66"
scale_param {
bias_term: true
}
}
layer {
name: "relu66"
type: "ReLU"
bottom: "batch_norm_blob66"
top: "relu_blob66"
}
layer {
name: "conv67"
type: "Convolution"
bottom: "relu_blob66"
top: "conv_blob67"
convolution_param {
num_output: 192
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm67"
type: "BatchNorm"
bottom: "conv_blob67"
top: "batch_norm_blob67"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale67"
type: "Scale"
bottom: "batch_norm_blob67"
top: "batch_norm_blob67"
scale_param {
bias_term: true
}
}
layer {
name: "relu67"
type: "ReLU"
bottom: "batch_norm_blob67"
top: "relu_blob67"
}
layer {
name: "conv68"
type: "Convolution"
bottom: "relu_blob67"
top: "conv_blob68"
convolution_param {
num_output: 192
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm68"
type: "BatchNorm"
bottom: "conv_blob68"
top: "batch_norm_blob68"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale68"
type: "Scale"
bottom: "batch_norm_blob68"
top: "batch_norm_blob68"
scale_param {
bias_term: true
}
}
layer {
name: "relu68"
type: "ReLU"
bottom: "batch_norm_blob68"
top: "relu_blob68"
}
layer {
name: "conv69"
type: "Convolution"
bottom: "relu_blob68"
top: "conv_blob69"
convolution_param {
num_output: 192
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm69"
type: "BatchNorm"
bottom: "conv_blob69"
top: "batch_norm_blob69"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale69"
type: "Scale"
bottom: "batch_norm_blob69"
top: "batch_norm_blob69"
scale_param {
bias_term: true
}
}
layer {
name: "relu69"
type: "ReLU"
bottom: "batch_norm_blob69"
top: "relu_blob69"
}
layer {
name: "ave_pool7"
type: "Pooling"
bottom: "cat_blob7"
top: "ave_pool_blob7"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv70"
type: "Convolution"
bottom: "ave_pool_blob7"
top: "conv_blob70"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm70"
type: "BatchNorm"
bottom: "conv_blob70"
top: "batch_norm_blob70"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale70"
type: "Scale"
bottom: "batch_norm_blob70"
top: "batch_norm_blob70"
scale_param {
bias_term: true
}
}
layer {
name: "relu70"
type: "ReLU"
bottom: "batch_norm_blob70"
top: "relu_blob70"
}
layer {
name: "cat8"
type: "Concat"
bottom: "relu_blob61"
bottom: "relu_blob64"
bottom: "relu_blob69"
bottom: "relu_blob70"
top: "cat_blob8"
concat_param {
axis: 1
}
}
layer {
name: "conv71"
type: "Convolution"
bottom: "cat_blob8"
top: "conv_blob71"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm71"
type: "BatchNorm"
bottom: "conv_blob71"
top: "batch_norm_blob71"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale71"
type: "Scale"
bottom: "batch_norm_blob71"
top: "batch_norm_blob71"
scale_param {
bias_term: true
}
}
layer {
name: "relu71"
type: "ReLU"
bottom: "batch_norm_blob71"
top: "relu_blob71"
}
layer {
name: "conv72"
type: "Convolution"
bottom: "relu_blob71"
top: "conv_blob72"
convolution_param {
num_output: 320
bias_term: false
pad: 0
kernel_size: 3
group: 1
stride: 2
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm72"
type: "BatchNorm"
bottom: "conv_blob72"
top: "batch_norm_blob72"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale72"
type: "Scale"
bottom: "batch_norm_blob72"
top: "batch_norm_blob72"
scale_param {
bias_term: true
}
}
layer {
name: "relu72"
type: "ReLU"
bottom: "batch_norm_blob72"
top: "relu_blob72"
}
layer {
name: "conv73"
type: "Convolution"
bottom: "cat_blob8"
top: "conv_blob73"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm73"
type: "BatchNorm"
bottom: "conv_blob73"
top: "batch_norm_blob73"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale73"
type: "Scale"
bottom: "batch_norm_blob73"
top: "batch_norm_blob73"
scale_param {
bias_term: true
}
}
layer {
name: "relu73"
type: "ReLU"
bottom: "batch_norm_blob73"
top: "relu_blob73"
}
layer {
name: "conv74"
type: "Convolution"
bottom: "relu_blob73"
top: "conv_blob74"
convolution_param {
num_output: 192
bias_term: false
pad: 0
pad: 3
kernel_size: 1
kernel_size: 7
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm74"
type: "BatchNorm"
bottom: "conv_blob74"
top: "batch_norm_blob74"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale74"
type: "Scale"
bottom: "batch_norm_blob74"
top: "batch_norm_blob74"
scale_param {
bias_term: true
}
}
layer {
name: "relu74"
type: "ReLU"
bottom: "batch_norm_blob74"
top: "relu_blob74"
}
layer {
name: "conv75"
type: "Convolution"
bottom: "relu_blob74"
top: "conv_blob75"
convolution_param {
num_output: 192
bias_term: false
pad: 3
pad: 0
kernel_size: 7
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm75"
type: "BatchNorm"
bottom: "conv_blob75"
top: "batch_norm_blob75"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale75"
type: "Scale"
bottom: "batch_norm_blob75"
top: "batch_norm_blob75"
scale_param {
bias_term: true
}
}
layer {
name: "relu75"
type: "ReLU"
bottom: "batch_norm_blob75"
top: "relu_blob75"
}
layer {
name: "conv76"
type: "Convolution"
bottom: "relu_blob75"
top: "conv_blob76"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 3
group: 1
stride: 2
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm76"
type: "BatchNorm"
bottom: "conv_blob76"
top: "batch_norm_blob76"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale76"
type: "Scale"
bottom: "batch_norm_blob76"
top: "batch_norm_blob76"
scale_param {
bias_term: true
}
}
layer {
name: "relu76"
type: "ReLU"
bottom: "batch_norm_blob76"
top: "relu_blob76"
}
layer {
name: "max_pool4"
type: "Pooling"
bottom: "cat_blob8"
top: "max_pool_blob4"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "cat9"
type: "Concat"
bottom: "relu_blob72"
bottom: "relu_blob76"
bottom: "max_pool_blob4"
top: "cat_blob9"
concat_param {
axis: 1
}
}
layer {
name: "conv77"
type: "Convolution"
bottom: "cat_blob9"
top: "conv_blob77"
convolution_param {
num_output: 320
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm77"
type: "BatchNorm"
bottom: "conv_blob77"
top: "batch_norm_blob77"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale77"
type: "Scale"
bottom: "batch_norm_blob77"
top: "batch_norm_blob77"
scale_param {
bias_term: true
}
}
layer {
name: "relu77"
type: "ReLU"
bottom: "batch_norm_blob77"
top: "relu_blob77"
}
layer {
name: "conv78"
type: "Convolution"
bottom: "cat_blob9"
top: "conv_blob78"
convolution_param {
num_output: 384
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm78"
type: "BatchNorm"
bottom: "conv_blob78"
top: "batch_norm_blob78"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale78"
type: "Scale"
bottom: "batch_norm_blob78"
top: "batch_norm_blob78"
scale_param {
bias_term: true
}
}
layer {
name: "relu78"
type: "ReLU"
bottom: "batch_norm_blob78"
top: "relu_blob78"
}
layer {
name: "conv79"
type: "Convolution"
bottom: "relu_blob78"
top: "conv_blob79"
convolution_param {
num_output: 384
bias_term: false
pad: 0
pad: 1
kernel_size: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm79"
type: "BatchNorm"
bottom: "conv_blob79"
top: "batch_norm_blob79"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale79"
type: "Scale"
bottom: "batch_norm_blob79"
top: "batch_norm_blob79"
scale_param {
bias_term: true
}
}
layer {
name: "relu79"
type: "ReLU"
bottom: "batch_norm_blob79"
top: "relu_blob79"
}
layer {
name: "conv80"
type: "Convolution"
bottom: "relu_blob78"
top: "conv_blob80"
convolution_param {
num_output: 384
bias_term: false
pad: 1
pad: 0
kernel_size: 3
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm80"
type: "BatchNorm"
bottom: "conv_blob80"
top: "batch_norm_blob80"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale80"
type: "Scale"
bottom: "batch_norm_blob80"
top: "batch_norm_blob80"
scale_param {
bias_term: true
}
}
layer {
name: "relu80"
type: "ReLU"
bottom: "batch_norm_blob80"
top: "relu_blob80"
}
layer {
name: "cat10"
type: "Concat"
bottom: "relu_blob79"
bottom: "relu_blob80"
top: "cat_blob10"
concat_param {
axis: 1
}
}
layer {
name: "conv81"
type: "Convolution"
bottom: "cat_blob9"
top: "conv_blob81"
convolution_param {
num_output: 448
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm81"
type: "BatchNorm"
bottom: "conv_blob81"
top: "batch_norm_blob81"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale81"
type: "Scale"
bottom: "batch_norm_blob81"
top: "batch_norm_blob81"
scale_param {
bias_term: true
}
}
layer {
name: "relu81"
type: "ReLU"
bottom: "batch_norm_blob81"
top: "relu_blob81"
}
layer {
name: "conv82"
type: "Convolution"
bottom: "relu_blob81"
top: "conv_blob82"
convolution_param {
num_output: 384
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm82"
type: "BatchNorm"
bottom: "conv_blob82"
top: "batch_norm_blob82"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale82"
type: "Scale"
bottom: "batch_norm_blob82"
top: "batch_norm_blob82"
scale_param {
bias_term: true
}
}
layer {
name: "relu82"
type: "ReLU"
bottom: "batch_norm_blob82"
top: "relu_blob82"
}
layer {
name: "conv83"
type: "Convolution"
bottom: "relu_blob82"
top: "conv_blob83"
convolution_param {
num_output: 384
bias_term: false
pad: 0
pad: 1
kernel_size: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm83"
type: "BatchNorm"
bottom: "conv_blob83"
top: "batch_norm_blob83"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale83"
type: "Scale"
bottom: "batch_norm_blob83"
top: "batch_norm_blob83"
scale_param {
bias_term: true
}
}
layer {
name: "relu83"
type: "ReLU"
bottom: "batch_norm_blob83"
top: "relu_blob83"
}
layer {
name: "conv84"
type: "Convolution"
bottom: "relu_blob82"
top: "conv_blob84"
convolution_param {
num_output: 384
bias_term: false
pad: 1
pad: 0
kernel_size: 3
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm84"
type: "BatchNorm"
bottom: "conv_blob84"
top: "batch_norm_blob84"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale84"
type: "Scale"
bottom: "batch_norm_blob84"
top: "batch_norm_blob84"
scale_param {
bias_term: true
}
}
layer {
name: "relu84"
type: "ReLU"
bottom: "batch_norm_blob84"
top: "relu_blob84"
}
layer {
name: "cat11"
type: "Concat"
bottom: "relu_blob83"
bottom: "relu_blob84"
top: "cat_blob11"
concat_param {
axis: 1
}
}
layer {
name: "ave_pool8"
type: "Pooling"
bottom: "cat_blob9"
top: "ave_pool_blob8"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv85"
type: "Convolution"
bottom: "ave_pool_blob8"
top: "conv_blob85"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm85"
type: "BatchNorm"
bottom: "conv_blob85"
top: "batch_norm_blob85"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale85"
type: "Scale"
bottom: "batch_norm_blob85"
top: "batch_norm_blob85"
scale_param {
bias_term: true
}
}
layer {
name: "relu85"
type: "ReLU"
bottom: "batch_norm_blob85"
top: "relu_blob85"
}
layer {
name: "cat12"
type: "Concat"
bottom: "relu_blob77"
bottom: "cat_blob10"
bottom: "cat_blob11"
bottom: "relu_blob85"
top: "cat_blob12"
concat_param {
axis: 1
}
}
layer {
name: "conv86"
type: "Convolution"
bottom: "cat_blob12"
top: "conv_blob86"
convolution_param {
num_output: 320
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm86"
type: "BatchNorm"
bottom: "conv_blob86"
top: "batch_norm_blob86"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale86"
type: "Scale"
bottom: "batch_norm_blob86"
top: "batch_norm_blob86"
scale_param {
bias_term: true
}
}
layer {
name: "relu86"
type: "ReLU"
bottom: "batch_norm_blob86"
top: "relu_blob86"
}
layer {
name: "conv87"
type: "Convolution"
bottom: "cat_blob12"
top: "conv_blob87"
convolution_param {
num_output: 384
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm87"
type: "BatchNorm"
bottom: "conv_blob87"
top: "batch_norm_blob87"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale87"
type: "Scale"
bottom: "batch_norm_blob87"
top: "batch_norm_blob87"
scale_param {
bias_term: true
}
}
layer {
name: "relu87"
type: "ReLU"
bottom: "batch_norm_blob87"
top: "relu_blob87"
}
layer {
name: "conv88"
type: "Convolution"
bottom: "relu_blob87"
top: "conv_blob88"
convolution_param {
num_output: 384
bias_term: false
pad: 0
pad: 1
kernel_size: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm88"
type: "BatchNorm"
bottom: "conv_blob88"
top: "batch_norm_blob88"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale88"
type: "Scale"
bottom: "batch_norm_blob88"
top: "batch_norm_blob88"
scale_param {
bias_term: true
}
}
layer {
name: "relu88"
type: "ReLU"
bottom: "batch_norm_blob88"
top: "relu_blob88"
}
layer {
name: "conv89"
type: "Convolution"
bottom: "relu_blob87"
top: "conv_blob89"
convolution_param {
num_output: 384
bias_term: false
pad: 1
pad: 0
kernel_size: 3
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm89"
type: "BatchNorm"
bottom: "conv_blob89"
top: "batch_norm_blob89"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale89"
type: "Scale"
bottom: "batch_norm_blob89"
top: "batch_norm_blob89"
scale_param {
bias_term: true
}
}
layer {
name: "relu89"
type: "ReLU"
bottom: "batch_norm_blob89"
top: "relu_blob89"
}
layer {
name: "cat13"
type: "Concat"
bottom: "relu_blob88"
bottom: "relu_blob89"
top: "cat_blob13"
concat_param {
axis: 1
}
}
layer {
name: "conv90"
type: "Convolution"
bottom: "cat_blob12"
top: "conv_blob90"
convolution_param {
num_output: 448
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm90"
type: "BatchNorm"
bottom: "conv_blob90"
top: "batch_norm_blob90"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale90"
type: "Scale"
bottom: "batch_norm_blob90"
top: "batch_norm_blob90"
scale_param {
bias_term: true
}
}
layer {
name: "relu90"
type: "ReLU"
bottom: "batch_norm_blob90"
top: "relu_blob90"
}
layer {
name: "conv91"
type: "Convolution"
bottom: "relu_blob90"
top: "conv_blob91"
convolution_param {
num_output: 384
bias_term: false
pad: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm91"
type: "BatchNorm"
bottom: "conv_blob91"
top: "batch_norm_blob91"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale91"
type: "Scale"
bottom: "batch_norm_blob91"
top: "batch_norm_blob91"
scale_param {
bias_term: true
}
}
layer {
name: "relu91"
type: "ReLU"
bottom: "batch_norm_blob91"
top: "relu_blob91"
}
layer {
name: "conv92"
type: "Convolution"
bottom: "relu_blob91"
top: "conv_blob92"
convolution_param {
num_output: 384
bias_term: false
pad: 0
pad: 1
kernel_size: 1
kernel_size: 3
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm92"
type: "BatchNorm"
bottom: "conv_blob92"
top: "batch_norm_blob92"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale92"
type: "Scale"
bottom: "batch_norm_blob92"
top: "batch_norm_blob92"
scale_param {
bias_term: true
}
}
layer {
name: "relu92"
type: "ReLU"
bottom: "batch_norm_blob92"
top: "relu_blob92"
}
layer {
name: "conv93"
type: "Convolution"
bottom: "relu_blob91"
top: "conv_blob93"
convolution_param {
num_output: 384
bias_term: false
pad: 1
pad: 0
kernel_size: 3
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm93"
type: "BatchNorm"
bottom: "conv_blob93"
top: "batch_norm_blob93"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale93"
type: "Scale"
bottom: "batch_norm_blob93"
top: "batch_norm_blob93"
scale_param {
bias_term: true
}
}
layer {
name: "relu93"
type: "ReLU"
bottom: "batch_norm_blob93"
top: "relu_blob93"
}
layer {
name: "cat14"
type: "Concat"
bottom: "relu_blob92"
bottom: "relu_blob93"
top: "cat_blob14"
concat_param {
axis: 1
}
}
layer {
name: "ave_pool9"
type: "Pooling"
bottom: "cat_blob12"
top: "ave_pool_blob9"
pooling_param {
pool: AVE
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: "conv94"
type: "Convolution"
bottom: "ave_pool_blob9"
top: "conv_blob94"
convolution_param {
num_output: 192
bias_term: false
pad: 0
kernel_size: 1
group: 1
stride: 1
weight_filler {
type: "xavier"
}
dilation: 1
}
}
layer {
name: "batch_norm94"
type: "BatchNorm"
bottom: "conv_blob94"
top: "batch_norm_blob94"
batch_norm_param {
use_global_stats: true
eps: 0.0010000000474974513
}
}
layer {
name: "bn_scale94"
type: "Scale"
bottom: "batch_norm_blob94"
top: "batch_norm_blob94"
scale_param {
bias_term: true
}
}
layer {
name: "relu94"
type: "ReLU"
bottom: "batch_norm_blob94"
top: "relu_blob94"
}
layer {
name: "cat15"
type: "Concat"
bottom: "relu_blob86"
bottom: "cat_blob13"
bottom: "cat_blob14"
bottom: "relu_blob94"
top: "cat_blob15"
concat_param {
axis: 1
}
}
layer {
name: "ave_pool10"
type: "Pooling"
bottom: "cat_blob15"
top: "ave_pool_blob10"
pooling_param {
pool: AVE
kernel_size: 8
stride: 8
}
}
layer {
name: "dropout1"
type: "Dropout"
bottom: "ave_pool_blob10"
top: "ave_pool_blob10"
include {
phase: TRAIN
}
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "view1"
type: "Reshape"
bottom: "ave_pool_blob10"
top: "view_blob1"
reshape_param {
shape {
dim: 0
dim: -1
}
}
}
layer {
name: "fc1"
type: "InnerProduct"
bottom: "view_blob1"
top: "fc_blob1"
inner_product_param {
num_output: 1000
bias_term: true
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}

layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "fc_blob1"
bottom: "label"
top: "loss"
}
layer {
name: "loss/top-1"
type: "Accuracy"
bottom: "fc_blob1"
bottom: "label"
top: "loss/top-1"
include {
phase: TEST
}
}
layer {
name: "acc/top-5"
type: "Accuracy"
bottom: "fc_blob1"
bottom: "label"
top: "acc/top-5"
include {
phase: TEST
}
accuracy_param {
top_k: 5
}
}

Seresnext50 cannot be converted

It gets stuck at the out = model.forward(inputs) step
layer0.conv1
conv14 was added to layers
2925302473856:conv_blob14 was added to blobs
Add blob conv_blob14 : torch.Size([1, 64, 75, 75])
2925302216384:blob3 getting
2925302473856:conv_blob14 getting
layer0.bn1
2925302473856:conv_blob14 getting
batch_norm11 was added to layers
2925302475944:batch_norm_blob11 was added to blobs
Add blob batch_norm_blob11 : torch.Size([1, 64, 75, 75])
bn_scale11 was added to layers
layer0.relu1
2925302475944:batch_norm_blob11 getting
relu7 was added to layers
2925302475944:relu_blob7 was added to blobs
Add blob relu_blob7 : torch.Size([1, 64, 75, 75])
2925302475944:relu_blob7 getting
layer0.pool
max_pool3 was added to layers
2925302479320:max_pool_blob3 was added to blobs
Add blob max_pool_blob3 : torch.Size([1, 64, 37, 37])
2925302475944:relu_blob7 getting
layer1.0.conv1
conv15 was added to layers
2925302509640:conv_blob15 was added to blobs
Add blob conv_blob15 : torch.Size([1, 128, 37, 37])
2925302479320:max_pool_blob3 getting
2925302509640:conv_blob15 getting
layer1.0.bn1
2925302509640:conv_blob15 getting
batch_norm12 was added to layers
2925302197992:batch_norm_blob12 was added to blobs
Add blob batch_norm_blob12 : torch.Size([1, 128, 37, 37])
bn_scale12 was added to layers
layer1.0.relu
2925302197992:batch_norm_blob12 getting
relu8 was added to layers
2925302197992:relu_blob8 was added to blobs
Add blob relu_blob8 : torch.Size([1, 128, 37, 37])
2925302197992:relu_blob8 getting
layer1.0.conv2
conv16 was added to layers
2925302511584:conv_blob16 was added to blobs
Add blob conv_blob16 : torch.Size([1, 128, 37, 37])
2925302197992:relu_blob8 getting
2925302511584:conv_blob16 getting
layer1.0.bn2
2925302511584:conv_blob16 getting
batch_norm13 was added to layers
2925302513456:batch_norm_blob13 was added to blobs
Add blob batch_norm_blob13 : torch.Size([1, 128, 37, 37])
bn_scale13 was added to layers
layer1.0.relu
2925302513456:batch_norm_blob13 getting
relu9 was added to layers
2925302513456:relu_blob9 was added to blobs
Add blob relu_blob9 : torch.Size([1, 128, 37, 37])
2925302513456:relu_blob9 getting
layer1.0.conv3
conv17 was added to layers
2925588200688:conv_blob17 was added to blobs
Add blob conv_blob17 : torch.Size([1, 256, 37, 37])
2925302513456:relu_blob9 getting
2925588200688:conv_blob17 getting
layer1.0.bn3
2925588200688:conv_blob17 getting
batch_norm14 was added to layers
2925588226840:batch_norm_blob14 was added to blobs
Add blob batch_norm_blob14 : torch.Size([1, 256, 37, 37])
bn_scale14 was added to layers
layer1.0.downsample.0
conv18 was added to layers
2925588226912:conv_blob18 was added to blobs
Add blob conv_blob18 : torch.Size([1, 256, 37, 37])
2925302479320:max_pool_blob3 getting
2925588226912:conv_blob18 getting
layer1.0.downsample.1
2925588226912:conv_blob18 getting
batch_norm15 was added to layers
2925302510648:batch_norm_blob15 was added to blobs
Add blob batch_norm_blob15 : torch.Size([1, 256, 37, 37])
bn_scale15 was added to layers
layer1.0.se_module.fc1
conv19 was added to layers
2925302217248:conv_blob19 was added to blobs
Add blob conv_blob19 : torch.Size([1, 16, 1, 1])


KeyError Traceback (most recent call last)
in ()
----> 1 out = model.forward(inputs)

~\PytorchToCaffe-master\model\models.py in forward(self, x)
386
387 def forward(self, x):
--> 388 x = self.features(x)
389 x = self.logits(x)
390 return x

~\PytorchToCaffe-master\model\models.py in features(self, x)
371 def features(self, x):
372 x = self.layer0(x)
--> 373 x = self.layer1(x)
374 x = self.layer2(x)
375 x = self.layer3(x)

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
90 def forward(self, input):
91 for module in self._modules.values():
---> 92 input = module(input)
93 return input
94

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)

~\PytorchToCaffe-master\model\models.py in forward(self, x)
68 residual = self.downsample(x)
69
---> 70 out = self.se_module(out) + residual
71 out = self.relu(out)
72

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)

~\PytorchToCaffe-master\model\models.py in forward(self, x)
40 module_input = x
41 x = self.avg_pool(x)
---> 42 x = self.fc1(x)
43 x = self.relu(x)
44 x = self.fc2(x)

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)

C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
318 def forward(self, input):
319 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 320 self.padding, self.dilation, self.groups)
321
322

~\PytorchToCaffe-master\pytorch_to_caffe.py in __call__(self, *args, **kwargs)
532 print(layer_names[layer])
533 break
--> 534 out=self.obj(self.raw,*args,**kwargs)
535 # if isinstance(out,Variable):
536 # out=[out]

~\PytorchToCaffe-master\pytorch_to_caffe.py in _conv2d(raw, input, weight, bias, stride, padding, dilation, groups)
101 log.add_blobs([x],name='conv_blob')
102 layer=caffe_net.Layer_param(name=name, type='Convolution',
--> 103 bottom=[log.blobs(input)], top=[log.blobs(x)])
104 layer.conv_param(x.size()[1],weight.size()[2:],stride=_pair(stride),
105 pad=_pair(padding),dilation=_pair(dilation),bias_term=bias is not None,groups=groups)

~\PytorchToCaffe-master\pytorch_to_caffe.py in blobs(self, var)
86 var=id(var)
87 if self.debug:
---> 88 print("{}:{} getting".format(var, self._blobs[var]))
89 try:
90 return self._blobs[var]

~\PytorchToCaffe-master\pytorch_to_caffe.py in __getitem__(self, key)
29 self.data[key]=value
30 def __getitem__(self, key):
---> 31 return self.data[key]
32 def __len__(self):
33 return len(self.data)

KeyError: 2925302175928
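
The KeyError is the converter failing to look up a blob name for the tensor entering fc1: the id passed to log.blobs() was never registered. Judging from the frames above, that tensor is the output of self.avg_pool(x) in the SE module; if that pooling call is not one of the functions the tracer wraps, its output never gets a blob name, and the next wrapped op (the 1x1 conv in fc1) raises. Below is a minimal sketch of this failure mode, assuming the converter keys blobs on id(tensor) as the debug log suggests; the _blobs dict and traced_conv2d are illustrative, not the library's API.

import torch
import torch.nn.functional as F

_blobs = {}  # stand-in for the converter's id(tensor) -> blob-name log

def traced_conv2d(x, weight):
    out = F.conv2d(x, weight)
    _blobs[id(out)] = 'conv_blob'      # a wrapped op registers its output
    return out

x = torch.ones(1, 3, 8, 8)
_blobs[id(x)] = 'data'
y = traced_conv2d(x, torch.ones(4, 3, 1, 1))
z = F.adaptive_avg_pool2d(y, 1)        # an unwrapped op: id(z) is never registered
print(id(y) in _blobs)                 # True
print(id(z) in _blobs)                 # False -> the next wrapped op hits KeyError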

KeyError when converting resnet18 from pytorch to caffe

Thank you for your nice work! I encountered some problems when converting resnet18 from pytorch to caffe. I just modified example/resnet_pytorch_2_caffe.py to convert a pretrained model as follows:

import sys
sys.path.insert(0,'.')
import torch
from torch.autograd import Variable
from torchvision.models import resnet
import pytorch_to_caffe

if __name__=='__main__':
    name='resnet18'
    resnet18=resnet.resnet18(pretrained=True)
    #checkpoint = torch.load("/home/shining/Downloads/resnet18-5c106cde.pth")
    
    #resnet18.load_state_dict(checkpoint)
    resnet18.eval()
    input=torch.ones([1,3,224,224])
    pytorch_to_caffe.trans_net(resnet18,input,name)
    pytorch_to_caffe.save_prototxt('{}.prototxt'.format(name))
    pytorch_to_caffe.save_caffemodel('{}.caffemodel'.format(name))

But I got a KeyError like the following.

Add blob       add_blob8       : torch.Size([1, 512, 7, 7])
140475289706360:batch_norm_blob20 getting
140475289705856:relu_blob15 getting
layer4.1.relu
140475289706216:add_blob8 getting
relu17 was added to layers
140475289706216:relu_blob17 was added to blobs
Add blob      relu_blob17      : torch.Size([1, 512, 7, 7])
140475289706216:relu_blob17 getting
view1 was added to layers
140475191369800:view_blob1 was added to blobs
Add blob       view_blob1      : torch.Size([1, 512])
Traceback (most recent call last):
  File "example/resnet_pytorch_2_caffe.py", line 16, in <module>
    pytorch_to_caffe.trans_net(resnet18,input,name)
  File "./pytorch_to_caffe.py", line 612, in trans_net
    out = net.forward(input_var)
  File "/root/anaconda3/envs/torch10/lib/python3.6/site-packages/torchvision/models/resnet.py", line 161, in forward
    x = x.view(x.size(0), -1)
  File "./pytorch_to_caffe.py", line 410, in _view
    bottom=[log.blobs(input)],top=top_blobs)
  File "./pytorch_to_caffe.py", line 88, in blobs
    print("{}:{} getting".format(var, self._blobs[var]))
  File "./pytorch_to_caffe.py", line 31, in __getitem__
    return self.data[key]
KeyError: 140475289706432

I guess there may be a bug in how the torch.Tensor.view method is converted to caffe.

Could you check it later? Thanks a lot.
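
One possible cause, depending on the torchvision version: if the resnet pools with nn.AdaptiveAvgPool2d, the underlying adaptive_avg_pool2d call may not be among the wrapped ops, so the pooled tensor never receives a blob name and the following x.view(...) fails exactly as above. A hedged workaround sketch is to swap in a fixed-size AvgPool2d before calling trans_net; for a 1x3x224x224 input the last feature map is 512x7x7, so the result is unchanged.

import torch
import torch.nn as nn
from torchvision.models import resnet

resnet18 = resnet.resnet18(pretrained=True)
# Replace the possibly untraced adaptive pooling with a fixed-size pool;
# equivalent for a 224x224 input, where the final feature map is 7x7.
resnet18.avgpool = nn.AvgPool2d(kernel_size=7, stride=1)
resnet18.eval()
input = torch.ones([1, 3, 224, 224])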

Is torch.cat not supported? unexpected keyword argument 'dim'

File "/home/shankun.shankunwan/CenterNet/src/lib/models/networks/DCNv2/dcn_v2.py", line 173, in forward
offset = torch.cat((o1, o2), dim=1)
File "/home/shankun.shankunwan/PytorchToCaffe/pytorch_to_caffe.py", line 534, in call
out=self.obj(self.raw,*args,**kwargs)
TypeError: _cat() got an unexpected keyword argument 'dim'
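
The converter replaces torch.cat with its own _cat wrapper, and the TypeError says that wrapper does not accept the dim keyword. Since torch.cat also takes the axis positionally, a workaround sketch (with illustrative tensors) is to drop the keyword:

import torch

o1 = torch.ones(1, 2, 4, 4)
o2 = torch.zeros(1, 2, 4, 4)
# Pass the axis positionally instead of as the dim keyword:
offset = torch.cat((o1, o2), 1)    # rather than torch.cat((o1, o2), dim=1)
print(offset.shape)                # torch.Size([1, 4, 4, 4])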

How can I understand the code?

Thanks, the code is great! I want to add support for some custom layers, but I do not know what the conversion scheme is. Could you briefly explain how the PyTorch-to-Caffe conversion works? A sketch of the pattern follows below.
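
From the tracebacks in the issues above, the scheme appears to be: every traced torch function is swapped for a wrapper whose __call__ runs the original op (out=self.obj(self.raw,*args,**kwargs)), registers the output tensor as a named blob, and appends a Caffe layer connecting the bottom and top blob names. Here is a self-contained sketch of that pattern, with illustrative names rather than the library's exact API:

import torch
import torch.nn.functional as F

blobs, layers = {}, []   # stand-ins for the converter's blob log and net proto

def make_traced(raw, caffe_type):
    def wrapped(*args, **kwargs):
        out = raw(*args, **kwargs)                  # 1. run the original op
        top = '{}_blob{}'.format(caffe_type.lower(), len(layers) + 1)
        blobs[id(out)] = top                        # 2. register the output blob
        bottom = blobs[id(args[0])]                 # 3. look up the input blob
        layers.append({'type': caffe_type, 'bottom': bottom, 'top': top})
        return out
    return wrapped

# A custom layer is supported by adding one more wrapper of this shape:
F.relu = make_traced(F.relu, 'ReLU')

x = torch.randn(1, 3, 4, 4)
blobs[id(x)] = 'data'
y = F.relu(x)
print(layers)   # [{'type': 'ReLU', 'bottom': 'data', 'top': 'relu_blob1'}]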
