
tflite_micro_compiler's Issues

[BUG] Build fails

I think I'm following the build instructions but to no avail...

cmake.exe -DGET_TF_SRC=ON TF_TAG=v2.3.0 ..
make

results in many errors, starting with:

In file included from /cygdrive/c/Users/Jeff/Source/Repos/tflite_micro_compiler/build/_deps/tf-src/tensorflow/lite/core/api/flatbuffer_conversions.h:28,
from /cygdrive/c/Users/Jeff/Source/Repos/tflite_micro_compiler/build/_deps/tf-src/tensorflow/lite/core/api/flatbuffer_conversions.cc:16:
/cygdrive/c/Users/Jeff/Source/Repos/tflite_micro_compiler/build/_deps/tf-src/tensorflow/lite/schema/schema_generated.h: In member function ‘bool tflite::QuantizationParameters::Verify(flatbuffers::Verifier&) const’:
/cygdrive/c/Users/Jeff/Source/Repos/tflite_micro_compiler/build/_deps/tf-src/tensorflow/lite/schema/schema_generated.h:3406:32: error: no matching function for call to ‘tflite::QuantizationParameters::VerifyField<uint8_t>(flatbuffers::Verifier&, tflite::QuantizationParameters::FlatBuffersVTableOffset) const’
3406 | VerifyField<uint8_t>(verifier, VT_DETAILS_TYPE) &&
| ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~

Sorry if I'm missing something obvious.

Update tflite-micro-compiler to work with recent tflite-micro

Hi,

We at XMOS have been doing some work internally to bring tflite-micro-compiler up to date with the latest tflite-micro.

We would be happy to contribute the changes. The work contains some hardware-specific changes at the moment, but we can put some effort into cleaning it up. Would a PR be of interest?

Best regards,
Deepak

Support visualC as a compiler

Empty dimensions and empty opdata in particular produce empty array declarations, which Visual C++ (version 12) does not support. In this case, make the array at least one element long (using a preprocessor macro to avoid this waste on other compilers); see the sketch below.
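A minimal sketch of the workaround, assuming a hypothetical TFLMC_PAD macro (zero-length arrays are a GCC/Clang extension that MSVC rejects outright):

// TFLMC_PAD is a hypothetical name; the generator would wrap every
// emitted array length in it.
#ifdef _MSC_VER
#define TFLMC_PAD(n) ((n) > 0 ? (n) : 1)  // MSVC: never emit a zero-length array
#else
#define TFLMC_PAD(n) (n)                  // elsewhere: keep the zero-waste form
#endif

// example of a generated declaration using the macro:
const struct { int sz; float elem[TFLMC_PAD(1)]; } quant_scale = { 1, { 0.5f } };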

Structure IntArray and FloatArray

const int hello_tensor_dimension0[3] = { 2, 1, 1 };
const struct { int sz; float elem[1]; } hello_quant0_scale = { 1, { 0.0084758047014474869 } };

should become

template <int SZ, class T> struct Array { int sz; T elem[SZ]; };
const Array<2, int> hello_tensor_dimension0 = { 2, { 1, 1 } };
const Array<1, float> hello_quant0_scale = { 1, { 0.008475… } };

Bonus: is there a way to avoid repeating the size while keeping it a compile-time-created structure (which ends up in a text section)? Some C++11 ctor magic with {}? One possibility is sketched below.
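One possible answer, as a sketch that assumes C++14 or later (a loop in a constexpr function is not valid C++11): let template argument deduction pick up the size from a braced initializer bound to an array reference, reusing the Array template above.

template <class T, int SZ>
constexpr Array<SZ, T> make_array(const T (&init)[SZ]) {
  Array<SZ, T> a{ SZ, {} };
  for (int i = 0; i < SZ; ++i) a.elem[i] = init[i];
  return a;
}

// SZ = 2 is deduced from the initializer, never written out:
constexpr auto hello_tensor_dimension0 = make_array<int>({ 1, 1 });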

Provide compilers targeting stable tensorflow releases

I am quite sure there are people around who would prefer to use a stable version of tensorflow with the compiler.
We should branch off r2.1, r2.2, and r2.3 branches and revert the small adaptations to TF master, so that each branch works with the respective version of tensorflow.

compiler executable file not found

I have followed all the steps described in the README.md to set up the project, but no compiler executable binary is generated, so I am unable to run this command:

./compiler hello_world.tflite hello_compiled.cpp hello_

Plans to move support to tflite-micro project?

Is your feature request related to a problem? Please describe.

  1. The project is stuck at an old version of tensorflow, and the separate tflite-micro project has now existed for about a year.
     It would be great to move to it.

  2. Also, is it possible to make this project part of the tflite-micro project itself?

Persistent buffers are aligned

The persistent buffers are aligned on 16-byte boundaries. Our current "FakeAllocatePersistentBuffer" doesn't account for this, which may cause issues for optimized kernels with alignment requirements. A sketch of the fix is below.
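A minimal sketch of an aligned fake allocator, assuming a simple bump allocator over a static arena (the arena, its size, and the callback signature here are assumptions and may differ from the repo's actual code):

#include <cstddef>
#include <cstdint>

static uint8_t g_arena[16 * 1024];  // illustrative arena size
static size_t g_head = 0;

// shaped like TfLiteContext::AllocatePersistentBuffer in recent tflite-micro
void* FakeAllocatePersistentBuffer(struct TfLiteContext* /*ctx*/, size_t bytes) {
  g_head = (g_head + 15u) & ~static_cast<size_t>(15u);  // round up to 16 bytes
  void* p = &g_arena[g_head];
  g_head += bytes;
  return p;
}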

Update required define

From the SIGMICRO list:

If you explicitly have TF_LITE_STATIC_MEMORY defined in a Make or build config, please update that build define to TF_LITE_MICRO.
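A small defensive check one could add during the transition, assuming the rename described above (a sketch only):

// fail the build early if an old config still defines the previous macro
#if defined(TF_LITE_STATIC_MEMORY) && !defined(TF_LITE_MICRO)
#error "TF_LITE_STATIC_MEMORY is outdated here; define TF_LITE_MICRO instead."
#endif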

Error while generating compiler file

Hi, I am trying to execute the command make -f tensorflow/lite/micro/tools/make/Makefile hello_world_bin, but it throws this error:

aryan@Apples-MacBook-Pro tensorflow % make -f tensorflow/lite/micro/tools/make/Makefile hello_world_bin
tensorflow/lite/micro/tools/make/Makefile:297: warning: overriding recipe for target 'tensorflow/lite/micro/tools/make/downloads/ruy'
tensorflow/lite/micro/tools/make/Makefile:297: warning: ignoring old recipe for target 'tensorflow/lite/micro/tools/make/downloads/ruy'
tensorflow/lite/micro/tools/make/Makefile:297: warning: overriding recipe for target 'tensorflow/lite/micro/tools/make/downloads/person_model_grayscale'
tensorflow/lite/micro/tools/make/Makefile:297: warning: ignoring old recipe for target 'tensorflow/lite/micro/tools/make/downloads/person_model_grayscale'
objcopy tensorflow/lite/micro/tools/make/gen/osx_x86_64/bin/hello_world tensorflow/lite/micro/tools/make/gen/osx_x86_64/bin/hello_world.bin -O binary
make: objcopy: No such file or directory
make: *** [tensorflow/lite/micro/tools/make/Makefile:351: tensorflow/lite/micro/tools/make/gen/osx_x86_64/bin/hello_world.bin] Error 127

I have cloned the tensorflow repository, switched to branch origin/r2.3, and am trying to resolve the issue.

Let me know if I have missed anything while setting up.

Support architectures which don't zero bss

Some TI compilers and GCC provide the option to not zero .bss in order to speed up boot time on embedded hardware (via __attribute__((section(".noinit")))).
Provide an option to explicitly zero all of the structures (tensors, nodes, context); a sketch is below.
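A sketch of what the generated code could emit, assuming a hypothetical TFLMC_NO_BSS_ZEROING option (GCC attribute syntax shown; TI compilers use pragmas instead):

#include <cstring>

#ifdef TFLMC_NO_BSS_ZEROING
__attribute__((section(".noinit")))
#endif
static unsigned char g_state[1024];  // illustrative stand-in for tensors/nodes/context

void hello_init() {
#ifdef TFLMC_NO_BSS_ZEROING
  // .bss was not cleared at boot, so zero our structures explicitly
  std::memset(g_state, 0, sizeof(g_state));
#endif
  // ... existing init code ...
}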

TFL Custom Ops

Hi,

Thanks for creating this tool. This is a great idea and makes a lot of sense for embedded devices. I was wondering whether there is a plan to support TFL custom ops. I think that in order to support custom ops, I would need to pass a custom MicroOpResolver instance to the Compiler class constructor and add a writeCustom method to the CodeWriter class (see the sketch after this message). Other than these two changes, is there any other modification required?

Thanks
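A hypothetical sketch of the constructor change described above; the actual Compiler/CodeWriter signatures in this repo may well differ:

#include "tensorflow/lite/micro/micro_op_resolver.h"  // assumed header path

class Compiler {
 public:
  // accept a caller-supplied resolver so custom op registrations can be
  // looked up during code generation
  Compiler(const void* model_data, tflite::MicroOpResolver* resolver);
  // ...
};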

Inplace operator planning

Dear Rafael,

I thought more about reshape operators (and their kind). What if we did the following:
if an operator's output has the same size as its first input, and that input is only used by this single node (the one we investigate here), we assume that the operator can be done in place, and in the planning stage we extend the tensor's lifetime to the lifetime of the output tensor.
I don't know whether this saves memory, but it will for sure make some operations more cache-friendly (and enable reshape to be a no-op under the right conditions). A sketch of the eligibility check is below.
What do you think?

(Should we use a list of invalid operators or a list of valid operators, with the default being the other case?)
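A minimal, self-contained sketch of the proposed check (the structs are simplified stand-ins for the planner's real data):

#include <cstddef>
#include <vector>

struct Tensor { size_t bytes; };
struct Node { std::vector<int> inputs, outputs; };

// use_counts[t] = number of nodes consuming tensor t
bool CanRunInPlace(const Node& node,
                   const std::vector<Tensor>& tensors,
                   const std::vector<int>& use_counts) {
  if (node.inputs.empty() || node.outputs.empty()) return false;
  const int in = node.inputs[0], out = node.outputs[0];
  return tensors[in].bytes == tensors[out].bytes  // same size...
      && use_counts[in] == 1;                     // ...and sole consumer
}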

Generated examples don't compile with r2.4

I just found that Register_CONV_2D, QUANTIZE, FULLY_CONNECTED, SOFTMAX, and DEPTHWISE_CONV_2D are not declared in micro_ops.h.
I need to figure out how to fix the examples!

PS: Sorry, I did not fall off the edge of the world; I just spend most of my time on gitlab with the Rust-written game veloren.net.

[DISCUSSION] MobileNetV2 train and quantize

Hello!
I'm wondering how to train a MobileNetV2 and fully quantize the tflite model. I know that I can do it in EdgeImpulse, but I want to change the architecture a bit.

I'm using tf2.3 and the MobileNetV2 from tf.keras.applications. While compiling the converted model I got this error:

Didn't find op for builtin opcode 'SHAPE' version '1'
Failed to get registration from op code SHAPE
Failed starting model allocation.
AllocateTensors() failed
Could not set up compiler

I'm wondering how you did that. I'll be very thankful for any advice.

Reduce code size by replacing linear code with const arrays and for loops

The size of the generated code can be further reduced by writing a const array containing the node and tensor values and then filling the live structures with a for loop in init and invoke.
I believe the nicer variant is to use another structure to collect the values and fill them one node/tensor per line; see the sketch below.
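A minimal, self-contained sketch of the table-driven variant (the types are simplified; the real generated code would use TfLiteIntArray/TfLiteNode and the OP_ enums):

#include <cstddef>

struct NodeInit { const int* inputs; const int* outputs; int op; };
struct Node     { const int* inputs; const int* outputs; int op; };

static const int inputs0[]  = { 1, 0 };  // { size, indices... } layout
static const int outputs0[] = { 1, 1 };

static const NodeInit g_node_init[] = {
  { inputs0, outputs0, /*OP_QUANTIZE=*/0 },  // one node per line
};
static Node g_nodes[sizeof(g_node_init) / sizeof(g_node_init[0])];

void init() {
  for (size_t i = 0; i < sizeof(g_node_init) / sizeof(g_node_init[0]); ++i) {
    g_nodes[i] = { g_node_init[i].inputs, g_node_init[i].outputs,
                   g_node_init[i].op };
  }
}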

[BUG] Compiler fails for mobilenet (memory corruption?)

When running make regenerate in the examples folder, the generated compiled_mobilenet.cpp is broken. Interestingly, a string is substituted (considered a custom operator?), so I suspect some interesting memory corruption:

-  { (TfLiteIntArray*)&inputs12, (TfLiteIntArray*)&outputs12, const_cast<void*>(static_cast<const void*>(&opdata12)), OP_CONV_2D, },
+  { (TfLiteIntArray*)&inputs12, (TfLiteIntArray*)&outputs12, const_cast<void*>(static_cast<const void*>(&opdata12)), OP_ROMTensor_L0in, 0, },

No proper alignment on opdata

If an operator is unknown, its opdata is dumped as an 8-bit array. But opdata should be 4-byte aligned on most architectures so that the 32-bit values commonly stored inside can be accessed correctly; see the sketch below.
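A one-line sketch of the fix, using C++11 alignas on the dumped array (the variable name and contents are illustrative):

// force 4-byte alignment on the raw byte dump of an unknown operator's opdata
alignas(4) static const unsigned char opdata7[] = { 0x01, 0x00, 0x00, 0x00 };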

Support custom operators

Plan:
Load a shared library implementing the operators (libtflite_micro_custom.so/.dll by default?).
Standardize how to register them: which function the compiled code calls to register a single operator (Register_{custom_name}?),
and which function to call to register all of them for the allocation phase (extern "C" TfLiteStatus register_custom(tflite::ops::micro::AllOpsResolver*)). A sketch of that hook is below.
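A sketch of the proposed registration hook, with the signature taken from the plan above (the custom op name and its Register_ factory are hypothetical, and AllOpsResolver's exact header path varies between TF versions):

#include "tensorflow/lite/micro/kernels/all_ops_resolver.h"  // path varies by TF version

extern TfLiteRegistration* Register_MY_OP();  // hypothetical, from the custom library

extern "C" TfLiteStatus register_custom(
    tflite::ops::micro::AllOpsResolver* resolver) {
  resolver->AddCustom("MY_OP", Register_MY_OP());
  return kTfLiteOk;
}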

[DISCUSSION] Moving TFLite cmake module to separate repository

I implemented a hello world example this morning on an STM324xG_EVAL (which worked :), see here), and I had to bring in some changes I had in my original CMake for that project (which was also very hacky and pretty rough). As we at TUM are building up many examples, and thus many repositories, I would be a fan of having a single FindTFLite.cmake that we can maintain for TF code-base changes, new architectures/boards, build types, etc.

I would be happy to maintain this as I and my students will probably use it a lot.

We can add it as a fetch dependency to the main CMakeLists to remove any hassle.

Let me know what you think.

Central place to specify tensor alignment

On some platforms, or with some optimizations, tensors need to be 8- to 16-byte aligned. This should be configurable in a central place (a #define). Maybe a different alignment for 0D, 1D, and 2+D tensors is preferred? A sketch is below.
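A sketch of such a central knob (the macro and arena names are hypothetical):

// central, overridable tensor-data alignment in bytes
#ifndef TFLMC_TENSOR_ALIGN
#define TFLMC_TENSOR_ALIGN 16
#endif

alignas(TFLMC_TENSOR_ALIGN) static unsigned char g_tensor_arena[8 * 1024];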

[IMPROVEMENT]

What should be improved?
Instructions on how to use the code (in the initial README.md).

Describe the solution you'd like
The current instructions for using this repository are not very clear. (I am an experienced C programmer but might lack some basic understanding of a complex build system like this, so please be patient with me.)
Pulling the repository is fine. The next part basically instructs you to cd to build and run cmake -DGET_TF_SRC=ON ..
This is fine, but please add a warning that this can take very long...
Some variations are discussed after that, which I did not execute.
The next step, actually compiling a model, is where I think things are unclear.

The instructions say: cd ../tensorflow
No such directory is present after the cmake run above. After some snooping I found the folder in tflite_micro_compiler/cmake/_deps/tf-src/
This is an awkward place for the folder, and I can only assume it is the default path of some script somewhere?

The next step also fails: ./compiler hello_world.tflite hello_compiled.cpp hello_
I assume this implies you are still in the tensorflow folder. Unfortunately there is no executable called ./compiler, due to the failure of the previous step. (I also suggest renaming the executable, because "compiler" can be confused with the C compiler.)

Next, it would be nice to know how to C-compile the examples: which files are compiled, and is there a makefile, a file list, or a directory with all the files somewhere? (A folder of files would be very nice if you want to copy and integrate this into a different project.)
Last but not least: if you have your own tflite model, how do you build and compile it?

I am also happy to test your suggestions and even update the document (but I will need just a bit of direction, as requested above).

Having said all that, thanks for the effort; this looks like an awesome idea.

[FEATURE] Omit constant initializations from the table

Don't add constant values (those that are the same for all nodes/tensors) to the info arrays; emit a statement in the for loop instead. This saves text space, especially for homogeneous models.

E.g.: don't add quantization information to each tensor initialization in a non-quantized model; see the sketch below.
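A sketch of the loop form (g_tensors and kNumTensors are illustrative names, assuming the table-driven init discussed above):

#include <cstddef>
#include "tensorflow/lite/c/common.h"

constexpr size_t kNumTensors = 8;  // illustrative
static TfLiteTensor g_tensors[kNumTensors];

void init() {
  for (size_t i = 0; i < kNumTensors; ++i) {
    // identical for every tensor of a non-quantized model, so set it once
    // here instead of storing it in each table entry
    g_tensors[i].quantization = { kTfLiteNoQuantization, nullptr };
    // per-tensor fields still come from the const table ...
  }
}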

Use enum for indexing g_registrations

E.g.
g_registrations[0] = tflite::ops::micro::Register_QUANTIZE();
status = g_registrations[0]->invoke(&g_ctx, &g_nodes[0]);
becomes
g_registrations[GREG_QUANTIZE] = tflite::ops::micro::Register_QUANTIZE();
status = g_registrations[GREG_QUANTIZE]->invoke(&g_ctx, &g_nodes[0]);

Especially the second change helps a lot when debugging and understanding the resulting code. The enum itself is sketched below.
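The enum would be generated alongside, one entry per registration slot (names illustrative, matching the snippet above):

enum GRegIndex {
  GREG_QUANTIZE,         // slot 0
  GREG_FULLY_CONNECTED,  // slot 1
  GREG_SOFTMAX,          // slot 2
  GREG_COUNT
};

TfLiteRegistration* g_registrations[GREG_COUNT];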

Better custom support

Custom libraries might define operators which are known to tensorflow lite, but not to lite micro. Currently this omits declaring the Register function (which is missing from micro_ops.h) and also does not correctly encode the opdata.
Implement a way to attach hooks to the code generator to better support adding such missing operators; a hypothetical hook interface is sketched below.
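A hypothetical hook interface (not the repo's actual API) that a custom library could implement to take over both problem spots:

#include <cstddef>
#include <ostream>
#include <string>

struct CodegenHooks {
  // return true if the hook emitted the Register_ declaration for op_name,
  // false to fall back to the default code path
  virtual bool WriteRegistration(std::ostream& out, const std::string& op_name) = 0;
  // same idea for emitting correctly typed and aligned opdata
  virtual bool WriteOpData(std::ostream& out, const std::string& op_name,
                           const void* data, size_t bytes) = 0;
  virtual ~CodegenHooks() = default;
};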

MicroInterpreter::input(N)/output(N) like interface

(My) TfLiteMicro code typically uses interpreter->input(0) and interpreter->output(0) to access data and compare dimensions; the hello_world example uses it too: https://github.com/tensorflow/tensorflow/blob/edbe5e189c1ec14d3a3386aa29e6118d807d9379/tensorflow/lite/micro/examples/hello_world/main_functions.cc#L82
Implement a similar interface for compiled code, returning the TfLiteTensor pointer; this way existing code can be adapted to compiled code more easily. A sketch is below.
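A sketch of the mirrored accessors (g_tensors and the index tables are assumed generator names):

#include "tensorflow/lite/c/common.h"

extern TfLiteTensor g_tensors[];  // defined by the generated code (assumed name)

static const int g_input_indices[]  = { 0 };  // illustrative tensor indices
static const int g_output_indices[] = { 7 };

TfLiteTensor* input(int n)  { return &g_tensors[g_input_indices[n]]; }
TfLiteTensor* output(int n) { return &g_tensors[g_output_indices[n]]; }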
