
TensorflowLite-flexdelegate

November 27, 2019: this document is under construction.

TensorFlow Lite will continue to have TensorFlow Lite builtin ops optimized for mobile and embedded devices. However, TensorFlow Lite models can now use a subset of TensorFlow ops when TFLite builtin ops are not sufficient.

1. Environment

2. Models to be tested

| No. | Model name | Note |
|-----|------------|------|
| 1 | multi_add_flex | TensorFlow official tutorial model. |
| 2 | ENet | Lightweight semantic segmentation model. |
| 3 | Learning-to-See-Moving-Objects-in-the-Dark | Learning to See Moving Objects in the Dark, ICCV 2019. |

3. How to build the TensorFlow Lite shared library with Flex Delegate enabled

3-1. x86_64 machine

$ cd ~
$ sudo apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev \
       libatlas-base-dev libopenblas-dev openjdk-8-jdk
$ sudo pip3 install keras_applications==1.0.8 --no-deps
$ sudo pip3 install keras_preprocessing==1.1.0 --no-deps
$ sudo pip3 install h5py==2.9.0
$ sudo apt-get install -y openmpi-bin libopenmpi-dev
$ sudo -H pip3 install -U --user six numpy wheel mock

$ cd ~
$ git clone https://github.com/PINTO0309/Bazel_bin.git
$ ./Bazel_bin/0.26.1/Ubuntu1604_x86_64/install.sh

$ git clone -b v2.0.0 https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ ./configure

WARNING: ignoring LD_PRELOAD in environment.
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.26.1- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3

Found possible Python library paths:
  /usr/local/lib
  /usr/local/lib/python3.6/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python3.6/dist-packages]
/usr/local/lib/python3.6/dist-packages
Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.

Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: 

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
    --config=gdr            # Build with GDR support.
    --config=verbs          # Build with libverbs support.
    --config=ngraph         # Build with Intel nGraph support.
    --config=numa           # Build with NUMA support.
    --config=dynamic_kernels    # (Experimental) Build kernels into separate shared objects.
    --config=v2             # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
    --config=noaws          # Disable AWS S3 filesystem support.
    --config=nogcp          # Disable GCP support.
    --config=nohdfs         # Disable HDFS support.
    --config=noignite       # Disable Apache Ignite support.
    --config=nokafka        # Disable Apache Kafka support.
    --config=nonccl         # Disable NVIDIA NCCL support.
Configuration finished
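The interactive session above can also be answered up front; a sketch, assuming the environment-variable names read by TF 2.0's configure.py (adjust the Python paths to your machine):

```shell
# Assumption: these are the variables TF 2.0's configure.py reads;
# any that are unset fall back to the interactive prompt.
export PYTHON_BIN_PATH=/usr/bin/python3
export PYTHON_LIB_PATH=/usr/local/lib/python3.6/dist-packages
export TF_ENABLE_XLA=0
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
export TF_NEED_CUDA=0
export TF_DOWNLOAD_CLANG=0
export TF_NEED_MPI=0
export TF_SET_ANDROID_WORKSPACE=0
export CC_OPT_FLAGS="-march=native -Wno-sign-compare"
# With these set, ./configure runs without prompting:
#   ./configure
echo "configure answers exported"
```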
$ cd tensorflow/lite
$ nano BUILD
Add "//tensorflow/lite/delegates/flex:delegate" to the deps of the tflite_cc_shared_object rule:
tflite_cc_shared_object(
    name = "libtensorflowlite.so",
    linkopts = select({
        "//tensorflow:macos": [
            "-Wl,-exported_symbols_list,$(location //tensorflow/lite:tflite_exported_symbols.lds)",
            "-Wl,-install_name,@rpath/libtensorflowlite.so",
        ],
        "//tensorflow:windows": [],
        "//conditions:default": [
            "-z defs",
            "-Wl,--version-script,$(location //tensorflow/lite:tflite_version_script.lds)",
        ],
    }),
    deps = [
        ":framework",
        ":tflite_exported_symbols.lds",
        ":tflite_version_script.lds",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite/delegates/flex:delegate",
    ],
)
$ nano tools/make/Makefile
Change the following line:
BUILD_WITH_NNAPI=true
to:
BUILD_WITH_NNAPI=false
$ sudo bazel build \
--config=monolithic \
--config=noaws \
--config=nohdfs \
--config=noignite \
--config=nokafka \
--config=nonccl \
--config=v2 \
--define=tflite_convert_with_select_tf_ops=true \
--define=with_select_tf_ops=true \
//tensorflow/lite:libtensorflowlite.so
$ sudo chmod 777 libtensorflowlite.so

3-2. armv7l machine

$ cd ~
$ sudo nano /etc/dphys-swapfile
CONF_SWAPFILE=2048
CONF_MAXSWAP=2048

$ sudo systemctl stop dphys-swapfile
$ sudo systemctl start dphys-swapfile

$ wget https://github.com/PINTO0309/Tensorflow-bin/raw/master/zram.sh
$ chmod 755 zram.sh
$ sudo mv zram.sh /etc/init.d/
$ sudo update-rc.d zram.sh defaults
$ sudo reboot
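Before starting the build, it is worth confirming that the enlarged swap actually took effect after the reboot; a minimal check, assuming a Linux host with /proc/meminfo:

```shell
# Read total swap from /proc/meminfo (reported in kB) and convert to MB.
swap_mb=$(awk '/SwapTotal/ {print int($2 / 1024)}' /proc/meminfo)
echo "Swap available: ${swap_mb} MB"
if [ "$swap_mb" -lt 2048 ]; then
  echo "Warning: less than 2 GB of swap; Bazel may be killed mid-build."
fi
```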

$ sudo apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev libatlas-base-dev libopenblas-dev
$ sudo pip3 install keras_applications==1.0.8 --no-deps
$ sudo pip3 install keras_preprocessing==1.1.0 --no-deps
$ sudo pip3 install h5py==2.9.0
$ sudo apt-get install -y openmpi-bin libopenmpi-dev
$ sudo -H pip3 install -U --user six numpy wheel mock

$ cd ~
$ git clone https://github.com/PINTO0309/Bazel_bin.git
$ ./Bazel_bin/0.26.1/Raspbian_Debian_Buster_armhf/openjdk-8-jdk/install.sh

$ git clone -b v2.0.0 https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ ./configure

WARNING: ignoring LD_PRELOAD in environment.
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.26.1- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3

Found possible Python library paths:
  /usr/local/lib
  /usr/local/lib/python3.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python3.7/dist-packages]
/usr/local/lib/python3.7/dist-packages
Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.

Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: 

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
    --config=gdr            # Build with GDR support.
    --config=verbs          # Build with libverbs support.
    --config=ngraph         # Build with Intel nGraph support.
    --config=numa           # Build with NUMA support.
    --config=dynamic_kernels    # (Experimental) Build kernels into separate shared objects.
    --config=v2             # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
    --config=noaws          # Disable AWS S3 filesystem support.
    --config=nogcp          # Disable GCP support.
    --config=nohdfs         # Disable HDFS support.
    --config=noignite       # Disable Apache Ignite support.
    --config=nokafka        # Disable Apache Kafka support.
    --config=nonccl         # Disable NVIDIA NCCL support.
Configuration finished
$ cd tensorflow/lite
$ nano BUILD
Add "//tensorflow/lite/delegates/flex:delegate" to the deps of the tflite_cc_shared_object rule:
tflite_cc_shared_object(
    name = "libtensorflowlite.so",
    linkopts = select({
        "//tensorflow:macos": [
            "-Wl,-exported_symbols_list,$(location //tensorflow/lite:tflite_exported_symbols.lds)",
            "-Wl,-install_name,@rpath/libtensorflowlite.so",
        ],
        "//tensorflow:windows": [],
        "//conditions:default": [
            "-z defs",
            "-Wl,--version-script,$(location //tensorflow/lite:tflite_version_script.lds)",
        ],
    }),
    deps = [
        ":framework",
        ":tflite_exported_symbols.lds",
        ":tflite_version_script.lds",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite/delegates/flex:delegate",
    ],
)
$ nano tools/make/Makefile
Change the following line:
BUILD_WITH_NNAPI=true
to:
BUILD_WITH_NNAPI=false
$ nano experimental/ruy/pack_arm.cc
"mov r0, 0\n""mov r0, #0\n"
$ sudo bazel --host_jvm_args=-Xmx512m build \
--config=monolithic \
--config=noaws \
--config=nohdfs \
--config=nonccl \
--config=v2 \
--define=tflite_convert_with_select_tf_ops=true \
--define=with_select_tf_ops=true \
--local_resources=4096.0,3.0,1.0 \
--copt=-mfpu=neon-vfpv4 \
--copt=-ftree-vectorize \
--copt=-funsafe-math-optimizations \
--copt=-ftree-loop-vectorize \
--copt=-fomit-frame-pointer \
--copt=-DRASPBERRY_PI \
--host_copt=-DRASPBERRY_PI \
--linkopt=-Wl,-latomic \
--host_linkopt=-Wl,-latomic \
//tensorflow/lite:libtensorflowlite.so
$ sudo chmod 777 libtensorflowlite.so
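The --local_resources=4096.0,3.0,1.0 flag above limits Bazel to 4 GB of RAM, 3 cores, and full I/O so the build survives on a Raspberry Pi. A sketch for deriving a value on a different board, assuming 512 MB of headroom for the OS is enough:

```shell
# MemTotal is reported in kB; convert to MB and reserve 512 MB for the OS
# (the 512 MB figure is an assumption, not a Bazel requirement).
mem_mb=$(awk '/MemTotal/ {print int($2 / 1024) - 512}' /proc/meminfo)
cores=$(nproc)
echo "--local_resources=${mem_mb}.0,${cores}.0,1.0"
```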

4. How to generate a TensorFlow Lite model file with Flex Delegate enabled + weight quantization

4-1. ENet (Weight quantization enabled)

$ cd ~/tensorflow/tensorflow/lite/python
$ sudo bazel run \
--define=with_select_tf_ops=true \
tflite_convert -- \
--graph_def_file=enet.pb \
--output_file=enet.tflite \
--input_arrays=input \
--output_arrays=ENet/fullconv/BiasAdd,ENet/logits_to_softmax \
--target_ops=TFLITE_BUILTINS,SELECT_TF_OPS

4-2. Learning-to-See-Moving-Objects-in-the-Dark (Weight quantization disabled)

$ cd ~/tensorflow/tensorflow/lite/python
$ sudo bazel run \
--define=with_select_tf_ops=true \
tflite_convert -- \
--graph_def_file=lsmod.pb \
--output_file=lsmod.tflite \
--input_arrays=input \
--output_arrays=output \
--target_ops=TFLITE_BUILTINS,SELECT_TF_OPS \
--allow_custom_ops
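After conversion it is useful to confirm which TF ops actually fell back to the Flex delegate. In the FlatBuffer, Flex ops are stored as custom op codes whose names carry a "Flex" prefix (e.g. FlexAddV2), so a rough scan of the file's printable strings reveals them; a sketch (list_flex_ops is a hypothetical helper name):

```shell
list_flex_ops() {
  # Split the binary on non-identifier bytes, then keep the names that
  # start with the "Flex" prefix used for select-TF-ops custom op codes.
  tr -c '[:alnum:]_' '\n' < "$1" | grep '^Flex' | sort -u
}
# Usage: list_flex_ops lsmod.tflite
```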

5. HTML visualization of .tflite files

$ cd ~/tensorflow
$ sudo bazel run tensorflow/lite/tools:visualize -- \
  ~/TensorflowLite-flexdelegate/models/enet/enet.tflite \
  ~/TensorflowLite-flexdelegate/models/enet/enet.tflite.html
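Before visualizing, a quick sanity check that the input really is a TFLite FlatBuffer can save a confusing failure: the file identifier "TFL3" sits at byte offset 4. A sketch (is_tflite is a hypothetical helper name):

```shell
is_tflite() {
  # Read 4 bytes at offset 4 and compare against the TFLite FlatBuffer
  # file identifier "TFL3".
  [ "$(dd if="$1" bs=1 skip=4 count=4 2>/dev/null)" = "TFL3" ]
}
# Usage: is_tflite enet.tflite && echo "looks like a TFLite model"
```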

6. Pre-built shared library

6-1. For Ubuntu 18.04 x86_64

https://github.com/PINTO0309/TensorflowLite-flexdelegate/blob/master/so/2.0.0/download_x86-64_libtensorflowlite.so.sh

6-2. For Raspbian Buster armv7l + RaspberryPi3/4

https://github.com/PINTO0309/TensorflowLite-flexdelegate/blob/master/so/2.1.0/download_debian_buster_armhf_libtensorflowlite.so.sh

6-3. For Ubuntu 19.10 aarch64 + RaspberryPi4

https://github.com/PINTO0309/TensorflowLite-flexdelegate/blob/master/so/2.1.0/download_ubuntu_1910_aarch64_libtensorflowlite.so.sh

7. Reference articles

  1. Select TensorFlow operators to use in TensorFlow Lite
  2. Shared library libtensorflowlite.so cannot be found after building from source
  3. How to invoke the Flex delegate for tflite interpreters?
  4. iwatake2222 / CNN_NumberDetector
  5. PINTO0309 / Tensorflow-bin
  6. PINTO0309 / TensorflowLite-bin
  7. PINTO0309 / Bazel_bin
  8. Post-training quantization - Tensorflow official tutorial
  9. Post-training integer quantization - Tensorflow official tutorial
  10. post_training_integer_quant.ipynb
  11. convert the checkpoint to SavedModel
  12. tensorflow/models/official/r1/mnist/mnist.py
  13. tensorflowjs_converter: SavedModel file does not exist at:
  14. tf.compat.v1.lite.TFLiteConverter - Convert a TensorFlow model into output_format
  15. C++ API producing incorrect model metaparams
  16. TensorFlow Lite C++ API explained - Qiita - ornew

Contributors

dependabot[bot], PINTO0309

Issues

Flex Ops

Hey PINTO, just wondering if you will have another look at Flex ops.

https://github.com/google-research/google-research/blob/master/kws_streaming/experiments/kws_experiments_paper_12_labels.md#crnn

Google Research has published a great repo covering state-of-the-art KWS models for TensorFlow Lite, and I have been trying to work out how to get the CRNN above working with TFLite. I thought a CRNN was naturally a streaming NN, so I'm slightly confused by the non-stream version; I presume they apply 1-second inputs to a naturally streaming model just for comparison. I also wonder what crnn_state is and why, but that is another question. There are a whole load of models there that would be really cool to get going on a Pi.

It's also really interesting with the upcoming 2.5: if I am reading it right, we should be able to use quantization together with Flex ops, and could expect 2-3x performance with little accuracy loss over the full framework.

I was wondering if you could give it a try, as so far my attempts have failed, but much work has been added to Flex ops, the biggest being quantization in 2.5.

Is it of any interest? I'm hoping so.

Also, the models in the repo above focus on best accuracy for comparison; I was wondering if you had done anything with https://www.tensorflow.org/model_optimization ?

error building using bazel

$ sudo bazel build \
--config=monolithic \
--config=noaws \
--config=nohdfs \
--config=noignite \
--config=nokafka \
--config=nonccl \
--config=v2 \
--define=tflite_convert_with_select_tf_ops=true \
--define=with_select_tf_ops=true \
//tensorflow/lite:libtensorflowlite.so

Not working for TensorFlow 2.3.0, Python 3.6, Bazel 3.6.1.
