
MIT licensed

OpenVX Neural Net Model Compiler & Optimizer

OpenVX Neural Net Model Compiler & Optimizer converts pre-trained neural network models to OpenVX runtime code for optimized inference.

Pre-trained models in ONNX, NNEF, and Caffe formats are supported by the model compiler & optimizer. The model compiler first converts the pre-trained model to the AMD Neural Net Intermediate Representation (NNIR), AMD's internal open format. The optimizer then walks the NNIR graph and applies optimizations that allow the model to be deployed onto the target hardware most efficiently. Finally, the AMD NNIR is converted into OpenVX C code, which can be compiled and deployed on any targeted AMD hardware.
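Taken together, the toolchain is a three-step pipeline: convert, optimize, generate. A minimal end-to-end sketch is shown below; the model file and folder names are illustrative placeholders, not fixed names:

% python onnx_to_nnir.py model.onnx model-nnir
% python nnir_update.py --fuse-ops 1 model-nnir model-nnir-fused
% python nnir_to_openvx.py model-nnir-fused model-openvx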

OpenVX RunTime

The OpenVX RunTime allows hundreds of OpenVX and OpenCV-interop vision functions to be added directly into the OpenVX C code generated by the model compiler & optimizer, for pre-processing the input to the neural network model and post-processing the model results. This allows users to create an end-to-end solution that can be deployed on any targeted AMD hardware.

Prerequisites

  • Ubuntu 16.04/18.04 or CentOS 7.5/7.6
  • OpenVX - Install any conformant OpenVX implementation

ONNX

  • numpy
  • onnx
% pip install onnx numpy

Note: ONNX Models are available at ONNX Model Zoo

NNEF

% pip install numpy

Note: NNEF Models are available at NNEF Model Zoo

Model Compiler & Optimizer Usage

Step 1 - Convert Pre-trained model to AMD NNIR

Caffe

To convert a pre-trained caffemodel into an AMD NNIR model:

% python caffe_to_nnir.py <net.caffeModel> <nnirOutputFolder> --input-dims <n,c,h,w> [--verbose <0|1>]
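For example, converting a GoogleNet caffemodel with a single 3x224x224 input (the file and folder names here are illustrative):

% python caffe_to_nnir.py googlenet.caffemodel googlenet-nnir --input-dims 1,3,224,224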

ONNX

To convert an ONNX model into an AMD NNIR model:

% python onnx_to_nnir.py <model.onnx> <nnirModelFolder> [OPTIONS]

OPTIONS:
	--input_dims n,c,h,w
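For example, converting an MNIST model whose input dimensions are already stored in the ONNX file (file and folder names illustrative):

% python onnx_to_nnir.py mnist.onnx mnist-nnir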

NNEF

To convert an NNEF model into an AMD NNIR model:

% python nnef_to_nnir.py <nnefInputFolder> <nnirOutputFolder>
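For example (folder names illustrative; the input folder holds the NNEF graph and data files):

% python nnef_to_nnir.py vgg16-nnef vgg16-nnir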

Note: If you want to create NNEF models from pre-trained Caffe or TensorFlow models, use the NNEF Converter or try the pre-converted models at the NNEF Model Zoo

Step 2 - Apply Optimizations

To update batch size in AMD NNIR model:

% python nnir_update.py --batch-size <N> <nnirModelFolder> <nnirModelFolderN>

To fuse operations in AMD NNIR model (like batch normalization into convolution):

% python nnir_update.py --fuse-ops <1> <nnirModelFolderN> <nnirModelFolderFused>

To quantize the model to FP16:

% python nnir_update.py --convert-fp16 <1> <nnirModelFolderN> <nnirModelFolderFused>

To work around grouped convolutions using slice and concat operations in the AMD NNIR model:

% python nnir_update.py --slice-groups <1> <nnirModelFolderFused> <nnirModelFolderSliced>
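These passes can be chained by feeding each output folder into the next pass. For example, the following sequence (folder names illustrative) sets the batch size to 16, fuses operations, and slices grouped convolutions:

% python nnir_update.py --batch-size 16 model-nnir model-nnir-b16
% python nnir_update.py --fuse-ops 1 model-nnir-b16 model-nnir-fused
% python nnir_update.py --slice-groups 1 model-nnir-fused model-nnir-sliced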

Step 3 - Convert AMD NNIR to OpenVX C code

To convert an AMD NNIR model into OpenVX C code:

% python nnir_to_openvx.py --help

Usage: python nnir_to_openvx.py [OPTIONS] <nnirInputFolder> <outputFolder>

  OPTIONS:
    --argmax UINT8                    -- argmax at the end with 8-bit output
    --argmax UINT16                   -- argmax at the end with 16-bit output
    --argmax <fileNamePrefix>rgb.txt  -- argmax at the end with RGB color mapping using LUT
    --argmax <fileNamePrefix>rgba.txt -- argmax at the end with RGBA color mapping using LUT
    --help                            -- show this help message

  LUT File Format (RGB): 8-bit R G B values, one row per label, in text format
    R0 G0 B0
    R1 G1 B1
    ...

  LUT File Format (RGBA): 8-bit R G B A values, one row per label, in text format
    R0 G0 B0 A0
    R1 G1 B1 A1
    ...
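For example, a hypothetical lut-rgb.txt for a 3-label segmentation model (background, road, vehicle) would contain one R G B row per label:

0 0 0
128 64 128
255 0 0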

Sample workflow for Model Compiler

Trained Caffe Model conversion to AMD NNIR to OpenVX Graph

  1. Convert net.caffemodel into an NNIR model using the following command
         % python caffe_to_nnir.py <net.caffeModel> <nnirOutputFolder> --input-dims n,c,h,w [--verbose 0|1]
    
  2. Compile the NNIR model into OpenVX C code with a CMakeLists.txt for compiling and building the inference library
         % python nnir_to_openvx.py <nnirModelFolder> <nnirModelOutputFolder>
    
  3. Use cmake and make to build the project inside the nnirModelOutputFolder
         % cd nnirModelOutputFolder
         % cmake .
         % make
    
  4. Run the anntest application to test inference with input and output tensors
         % ./anntest weights.bin
    
  5. The shared C library (libannmodule.so) can be used in any custom application; a linking sketch is shown below
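A minimal linking sketch, assuming the build folder from step 3 and an installed OpenVX implementation (the application name, include path, and library paths are illustrative):

% gcc -o myapp myapp.c -I/opt/rocm/include -L<nnirModelOutputFolder> -lannmodule -L/opt/rocm/lib -lopenvx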

Examples for OpenVX C code generation

  • Generate OpenVX and test code that can be used to dump and compare raw tensor data:
% python nnir_to_openvx.py nnirInputFolderFused openvxCodeFolder
% mkdir openvxCodeFolder/build
% cd openvxCodeFolder/build
% cmake ..
% make
% ./anntest

Note:

Usage: anntest <weights.bin> [<input-data-file(s)> [<output-data-file(s)>]] [--add ADD] [--multiply MULTIPLY]

   <input-data-file>: filename to initialize the input tensor
     .jpg or .png: decode and initialize for 3 channel tensors
         (use %04d in fileName when batch-size > 1: batch index starts from 0)
     other: initialize tensor with raw data from the file

   <output-data-file>[,<reference-for-compare>,<maxErrorLimit>,<rmsErrorLimit>]:
     <reference-for-compare> is raw tensor data for comparison
     <maxErrorLimit> is max absolute error allowed
     <rmsErrorLimit> is max RMS error allowed
     <output-data-file> is filename for saving output tensor data
       '-' to ignore
       other: save raw tensor into the file
       
   <add>: input preprocessing factor [optional - default:[0,0,0]]
   
   <multiply>: input preprocessing factor [optional - default:[1,1,1]]

% ./anntest ../weights.bin input.f32 output.f32,reference.f32,1e-6,1e-9 --add -2.1,-2.07,-1.8 --multiply 0.017,0.017,0.017
...
  • Generate OpenVX and test code with argmax that can be used to dump and compare 16-bit argmax output tensors:
% python nnir_to_openvx.py --argmax UINT16 nnirInputFolderFused openvxCodeFolder
% mkdir openvxCodeFolder/build
% cd openvxCodeFolder/build
% cmake ..
% make
% ./anntest

Note:

Usage: anntest <weights.bin> [<input-data-file(s)> [<output-data-file(s)>]]

   <input-data-file>: filename to initialize the input tensor
     .jpg or .png: decode and initialize for 3 channel tensors
         (use %04d in fileName when batch-size > 1: batch index starts from 0)
     other: initialize tensor with raw data from the file

   <output-data-file>[,<reference-for-compare>,<percentMismatchLimit>]:
     <reference-for-compare> is raw tensor data of argmax output for comparison
     <percentMismatchLimit> is max mismatch (percentage) allowed
     <output-data-file> is filename for saving output tensor data
       '-' to ignore
       other: save raw tensor into the file

% ./anntest ../weights.bin input-%04d.png output.u16,reference.u16,0.01
...
  • Generate OpenVX and test code with argmax and LUT that is designed for semantic segmentation use cases. You can dump output in raw format or PNGs and additionally compare with reference data in raw format.
% python nnir_to_openvx.py --argmax lut-rgb.txt nnirInputFolderFused openvxCodeFolder
% mkdir openvxCodeFolder/build
% cd openvxCodeFolder/build
% cmake ..
% make
% ./anntest

Note:

Usage: anntest <weights.bin> [<input-data-file(s)> [<output-data-file(s)>]]

   <input-data-file>: filename to initialize the input tensor
     .jpg or .png: decode and initialize for 3 channel tensors
         (use %04d in fileName when batch-size > 1: batch index starts from 0)
     other: initialize tensor with raw data from the file

   <output-data-file>[,<reference-for-compare>,<percentMismatchLimit>]:
     <reference-for-compare> is raw tensor data of LUT output for comparison
     <percentMismatchLimit> is max mismatch (percentage) allowed
     <output-data-file> is filename for saving output tensor data
       .png: save LUT output as PNG file(s)
         (use %04d in fileName when batch-size > 1: batch index starts from 0)
       '-' to ignore
       other: save raw tensor into the file

% ./anntest ../weights.bin input-%04d.png output.rgb,reference.rgb,0.01
...
% ./anntest ../weights.bin input-%04d.png output-%04d.png,reference.rgb,0.01
...

Test code with preprocessing add / multiply values to normalize the input tensor. Some models (e.g., Inception-V4) require the input tensor to be normalized; you can pass per-channel preprocessing values using the --add and --multiply options. In the example below, the values appear to fold mean subtraction and scaling into the form x * multiply + add (i.e., (x - mean) * scale).

% ./anntest ../weights.bin input.f32 output.f32 --add -2.1,-2.07,-1.8 --multiply 0.017,0.017,0.017
...

Models & Operators currently supported

Models

Networks (supported in one or more of the Caffe, ONNX, and NNEF formats; per-format availability varies by network):

  • AlexNet
  • CaffeNet
  • DenseNet
  • GoogLeNet
  • Inception-V1
  • Inception-V2
  • Inception-V3
  • Inception-V4
  • MNIST
  • MobileNet
  • MobileNet-V2
  • ResNet-18
  • ResNet-34
  • ResNet-50
  • ResNet-101
  • ResNet-152
  • ResNetV2-18
  • ResNetV2-34
  • ResNetV2-50
  • ResNetV2-101
  • SqueezeNet
  • Tiny-Yolo-V2
  • VGGNet-16
  • VGGNet-19
  • Yolo-V3
  • ZFNet

Note:

  • Currently supporting ONNX models with release 1.1 and release 1.3 tags

Operators

Layers/operators (supported in one or more of the Caffe, ONNX, and NNEF formats; per-format availability varies by operator):

  • Add
  • Argmax
  • AveragePool
  • BatchNormalization
  • Cast
  • Clamp
  • Clip
  • Concat
  • Constant
  • Conv
  • ConvTranspose
  • Copy
  • Crop
  • CropAndResize
  • Deconv
  • DetectionOutput
  • Div
  • Dropout
  • Eltwise
  • Exp
  • Flatten
  • GEMM
  • GlobalAveragePool
  • InnerProduct
  • Interp
  • LeakyRelu
  • Linear
  • Log
  • LRN
  • Matmul
  • Max
  • MaxPool
  • MeanReduce
  • Min
  • Mul
  • MulAdd
  • Permute
  • PriorBox
  • Relu
  • Reshape
  • Shape
  • Sigmoid
  • Slice
  • Split
  • Softmax
  • SoftmaxWithLoss
  • Squeeze
  • Sub
  • Sum
  • Transpose
  • Unsqueeze
  • Upsample

Contribution

The OpenVX Model Compiler is contributed by AMD from its MIVisionX toolkit.

Contributing to OpenVX Model Compiler

We welcome contributions to the Model Compiler to extend its functionality and add support for more layers and models. When contributing to this repository, please first discuss the changes you wish to make via an issue, and then submit a pull request.

