
tensorflow / tflite-support

361 stars · 27 watchers · 125 forks · 146.69 MB

TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile / IoT devices.

License: Apache License 2.0

Starlark 9.46% Shell 0.74% Java 18.46% C++ 44.85% Python 13.98% C 2.35% Objective-C 7.37% Swift 0.74% Objective-C++ 0.67% Batchfile 0.03% Jupyter Notebook 1.30% Makefile 0.05%

tflite-support's Introduction

TensorFlow Lite Support

TFLite Support is a toolkit that helps users develop ML and deploy TFLite models onto mobile devices. It works cross-platform and is supported in Java, C++ (WIP), and Swift (WIP). The TFLite Support project consists of the following major components:

  • TFLite Support Library: a cross-platform library that helps to deploy TFLite models onto mobile devices.
  • TFLite Model Metadata (metadata populator and metadata extractor library): includes both human- and machine-readable information about what a model does and how to use it.
  • TFLite Support Codegen Tool: an executable that generates model wrappers automatically based on the Support Library and the metadata.
  • TFLite Support Task Library: a flexible and ready-to-use library for common machine learning model types, such as classification and detection; clients can also build their own native/Android/iOS inference APIs on the Task Library infrastructure (see the sketch below).
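
For illustration, here is a minimal Kotlin sketch of the Task Library's object detection API on Android. The asset name "model.tflite" is a placeholder assumption; any detection model with metadata should work:

import android.content.Context
import android.graphics.Bitmap
import android.util.Log
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.ObjectDetector

// A sketch, not canonical usage: load a detection model bundled in assets
// and log the top category of each detection.
fun runDetector(context: Context, bitmap: Bitmap) {
    // "model.tflite" is a placeholder asset name.
    val detector = ObjectDetector.createFromFile(context, "model.tflite")
    for (detection in detector.detect(TensorImage.fromBitmap(bitmap))) {
        val top = detection.categories.first()
        Log.d("TaskLibrary", "${top.label}: ${top.score}")
    }
}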

The TFLite Support library serves different tiers of deployment requirements, from easy onboarding to full customization. There are three major use cases that TFLite Support targets:

  • Provide ready-to-use APIs for users to interact with the model.
    This is achieved by the TFLite Support Codegen tool: users can get the model interface (containing ready-to-use APIs) simply by passing the model to the codegen tool. The automatic codegen strategy is designed based on the TFLite metadata.

  • Provide optimized model interface for popular ML tasks.
    The model interfaces provided by the TFLite Support Task Library are specifically optimized compared to the codegen version in terms of both usability and performance. Users can also swap in their own custom models for the default models in each task.

  • Provide the flexibility to customize model interface and build inference pipelines.
    The TFLite Support Util Library contains a variety of utility methods and data structures to perform pre/post-processing and data conversion. It is also designed to match the behavior of TensorFlow modules, such as TF.Image and TF.Text, ensuring consistency from training to inference (a small sketch follows below).
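
As a small illustration of the Util Library's post-processing helpers, the Kotlin sketch below dequantizes a model output and attaches labels to it. The normalization parameters and label list are assumptions made for the example:

import org.tensorflow.lite.support.common.TensorProcessor
import org.tensorflow.lite.support.common.ops.NormalizeOp
import org.tensorflow.lite.support.label.TensorLabel
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer

// Sketch: map a (quantized) output buffer into [0, 1] scores keyed by label.
fun labelOutput(labels: List<String>, output: TensorBuffer): Map<String, Float> {
    val processor = TensorProcessor.Builder()
        .add(NormalizeOp(0f, 255f)) // illustrative dequantization parameters
        .build()
    // Pair each class label with its processed score.
    return TensorLabel(labels, processor.process(output)).mapWithFloatValue
}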

See the documentation on tensorflow.org for more instructions and examples.

Build Instructions

We use Bazel to build the project. When building the Java (Android) utils, you need to set the following environment variables correctly:

  • ANDROID_NDK_HOME
  • ANDROID_SDK_HOME
  • ANDROID_NDK_API_LEVEL
  • ANDROID_SDK_API_LEVEL
  • ANDROID_BUILD_TOOLS_VERSION

How to contribute

Please issue a pull request and assign @lu-wang-g for a code review.

Contact us

Let us know what you think about TFLite Support by creating a new GitHub issue, or email us at [email protected].

tflite-support's People

Contributors

am15h, cushon, farmaker47, fergushenderson, flamearrow, gribozavr, jonpsy, khanhlvg, kinarr, lintian06, lu-wang-g, markmcd, miaout17, milindthakur177, multiverse-tf, priankakariatyml, rwgk, schmidt-sebastian, talumbau, tensorflower-gardener, terryheo, tflite-support-robot, thaink, utzcoz, vihangaaw, wangtz, xunkai55, yilei, zetafunction, ziyeqinghan


tflite-support's Issues

"bert_question_answerer.h" depends on non exsiting file in tensorflow lite

Hi,

I tried to use the files in the "qa" task and got the following include error:

cannot open source file "tensorflow/lite/experimental/acceleration/configuration/configuration.pb.h" (dependency of "tensorflow_lite_support/cc/task/text/qa/bert_question_answerer.h")

There is no such file in the tensorflow repository: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/acceleration/configuration.

Thanks,
Mohamed.

Publish to Maven Central or Google Maven

It seems the TensorFlow Lite support and base artifacts are hosted on JCenter.

The TensorFlow repo address at pages linked above is https://dl.bintray.com/google/tensorflow/.

JFrog announced a full Bintray / JCenter shutdown on May 1st, 2021. It would be nice to have these artifacts available somewhere else. While the title says Maven Central, the Google Maven repo would also work if that is more convenient for Google folks.

TFLite quantized inference is slower than TFLite float32 on Intel CPU

I converted a network to TFLite using the DEFAULT optimization (float32) setting, and its inference speed is around 25 fps. When I converted the same network to INT8-quantized TFLite, its inference speed is around 2 fps on an 8-core Intel Core i9 at 2.3 GHz. Is this expected on CPU? Can somebody please explain what causes the slowness of INT8 inference?

Pre-process input for InceptionV3 tflite model

I have successfully converted an InceptionV3 model to tflite using toco and inference type FLOAT. I have verified that the tflite model will produce good classification confidence levels as long as the image input data is pre-processed the exact same way as the input to the .pb model. I verified this by extracting the preprocessed data on .NET and running the exact same input on the tflite model in Android.

The problem I'm having is formatting the image input using tflite on Android to get the same classification confidence levels. Here is the method used to preprocess data with SciSharp https://github.com/SciSharp/SciSharp-Stack-Examples/blob/master/src/TensorFlowNET.Examples/ImageProcessing/TransferLearningWithInceptionV3.cs

        private (Tensor, Tensor) add_jpeg_decoding()
        {
            // height, width, depth
            var input_dim = (299, 299, 3);
            var jpeg_data = tf.placeholder(tf.@string, name: "DecodeJPGInput");
            var decoded_image = tf.image.decode_jpeg(jpeg_data, channels: input_dim.Item3);
            // Convert from full range of uint8 to range [0,1] of float32.
            var decoded_image_as_float = tf.image.convert_image_dtype(decoded_image, tf.float32);
            var decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0);
            var resize_shape = tf.stack(new int[] { input_dim.Item1, input_dim.Item2 });
            var resize_shape_as_int = tf.cast(resize_shape, dtype: tf.int32);
            var resized_image = tf.image.resize_bilinear(decoded_image_4d, resize_shape_as_int);
            return (jpeg_data, resized_image);
        }

I can replicate the image pre-processing up to the tf.expand_dims line. Is there a way to effectively do all of this preprocessing in Android using tflite-support? These are the commands I'm having trouble replicating:

            var decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0);
            var resize_shape = tf.stack(new int[] { input_dim.Item1, input_dim.Item2 });
            var resize_shape_as_int = tf.cast(resize_shape, dtype: tf.int32);
            var resized_image = tf.image.resize_bilinear(decoded_image_4d, resize_shape_as_int);

I understand there is ImageProcessor with a ResizeOp, but that does not give me the same results as processing the image in .NET with the above commands. Is there a way to replicate the above four lines using tflite-support methods?
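
One possible approach, assuming the main differences are the resize method and the [0, 1] scaling: ResizeOp supports BILINEAR (matching tf.image.resize_bilinear), and NormalizeOp(0f, 255f) maps uint8 values into [0, 1] (matching convert_image_dtype). tf.expand_dims has no direct counterpart because the batch dimension of 1 is implicit when the resulting buffer is fed to the interpreter. A hedged Kotlin sketch; note exact numeric parity may still differ slightly, since the .NET pipeline converts to float before resizing:

import android.graphics.Bitmap
import org.tensorflow.lite.DataType
import org.tensorflow.lite.support.common.ops.NormalizeOp
import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.support.image.ops.ResizeOp

// Sketch: bilinear resize to 299x299, then scale pixel values into [0, 1].
fun preprocess(bitmap: Bitmap): TensorImage {
    val processor = ImageProcessor.Builder()
        .add(ResizeOp(299, 299, ResizeOp.ResizeMethod.BILINEAR))
        .add(NormalizeOp(0f, 255f)) // (x - 0) / 255 maps [0, 255] into [0, 1]
        .build()
    val image = TensorImage(DataType.FLOAT32)
    image.load(bitmap)
    return processor.process(image)
}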

TensorFlowLiteTaskText requires outdated GoogleToolboxForMac

Currently the CocoaPods podspec for TensorFlowLiteTaskText has a dependency on Google Toolbox with an explicit version ('GoogleToolboxForMac', '2.2.1'). This is no longer the latest version of the Toolbox, which is starting to cause conflicts with other pods. Any chance the podspec could be updated to use major-version pinning so it can adjust as Google releases new minor and patch versions (i.e. 'GoogleToolboxForMac', '~> 2.0')?

tflite quantized model producing different results on CPU / NNAPI+NPU

I have a quantized tflite model deployed on a system equipped with an NPU and NNAPI. I noticed that the same model produces different outputs when inference is performed on the CPU instead of NPU+NNAPI. Sometimes the differences are very large. What is the reason for this?

Cannot download sources in Android via Gradle

I am running the demo 'https://github.com/tensorflow/examples/tree/master/lite/examples/text_classification/android' on Android. When I try to download the source of 'tensorflow-lite-task-text-0.0.0-nightly@aar' with Gradle, I get the following error:

16:39:51: Executing task 'DownloadSources --info'...

Executing tasks: [DownloadSources] in project D:\androidstudioprogram\examples\lite\examples\text_classification\android

The client will now receive all logging from the daemon (pid: 14264). The daemon log file: C:\Users\DELL.gradle\daemon\6.1.1\daemon-14264.out.log
Starting 7th build in daemon [uptime: 1 hrs 30 mins 29.813 secs, performance: 100%]
Closing daemon's stdin at end of input.
The daemon will no longer process any standard input.
Using 4 worker leases.
Starting Build
Settings evaluated using settings file 'D:\androidstudioprogram\examples\lite\examples\text_classification\android\settings.gradle'.
Projects loaded. Root project using build file 'D:\androidstudioprogram\examples\lite\examples\text_classification\android\build.gradle'.
Included projects: [root project 'TFLite Text Classification Demo App', project ':app', project ':lib_interpreter', project ':lib_task_api']

Configure project :
Evaluating root project 'TFLite Text Classification Demo App' using build file 'D:\androidstudioprogram\examples\lite\examples\text_classification\android\build.gradle'.
Invalidating in-memory cache of C:\Users\DELL.gradle\caches\journal-1\file-access.bin
Invalidating in-memory cache of C:\Users\DELL.gradle\caches\6.1.1\fileHashes\fileHashes.bin
Invalidating in-memory cache of C:\Users\DELL.gradle\caches\6.1.1\fileHashes\resourceHashesCache.bin

Configure project :app
Evaluating project ':app' using build file 'D:\androidstudioprogram\examples\lite\examples\text_classification\android\app\build.gradle'.
Creating configuration androidTestUtil

Configure project :lib_interpreter
Evaluating project ':lib_interpreter' using build file 'D:\androidstudioprogram\examples\lite\examples\text_classification\android\lib_interpreter\build.gradle'.
Creating configuration androidTestUtil

Configure project :lib_task_api
Evaluating project ':lib_task_api' using build file 'D:\androidstudioprogram\examples\lite\examples\text_classification\android\lib_task_api\build.gradle'.
Creating configuration androidTestUtil
All projects evaluated.
Analytics other plugin to proto: Unknown plugin type de.undercouch.gradle.tasks.download.DownloadTaskPlugin expected enum DE_UNDERCOUCH_GRADLE_TASKS_DOWNLOAD_DOWNLOADTASKPLUGIN
Analytics other plugin to proto: Unknown plugin type de.undercouch.gradle.tasks.download.DownloadTaskPlugin expected enum DE_UNDERCOUCH_GRADLE_TASKS_DOWNLOAD_DOWNLOADTASKPLUGIN
Selected primary task 'DownloadSources' from project :
Tasks to be executed: [task ':lib_task_api:DownloadSources']
:lib_task_api:DownloadSources (Thread[Daemon worker Thread 5,5,main]) started.

Task :lib_task_api:DownloadSources FAILED
Task :lib_task_api:DownloadSources in app Starting
Caching disabled for task ':lib_task_api:DownloadSources' because:
Build cache is disabled
Task ':lib_task_api:DownloadSources' is not up-to-date because:
Task has not declared any outputs despite executing actions.
Resource missing. [HTTP HEAD: https://dl.google.com/dl/android/maven2/org/tensorflow/tensorflow-lite-task-text/0.0.0-nightly@aar/[email protected]]
Resource missing. [HTTP HEAD: https://jcenter.bintray.com/org/tensorflow/tensorflow-lite-task-text/0.0.0-nightly@aar/[email protected]]
Task :lib_task_api:DownloadSources in app Finished
:lib_task_api:DownloadSources (Thread[Daemon worker Thread 5,5,main]) completed. Took 0.824 secs.
1 actionable task: 1 executed

FAILURE: Build failed with an exception.

  • Where:
    Initialization script 'C:\Users\DELL\AppData\Local\Temp\ijmiscinit1.gradle' line: 20

  • What went wrong:
    Execution failed for task ':lib_task_api:DownloadSources'.

Could not resolve all files for configuration ':lib_task_api:downloadSources_6a4637ed-031b-43a8-a402-10ca98ba8740'.
Could not find org.tensorflow:tensorflow-lite-task-text:0.0.0-nightly@aar.
Required by:
project :lib_task_api

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

  • Get more help at https://help.gradle.org

BUILD FAILED in 1s
16:39:53: Task execution finished 'DownloadSources --info'.

When can we use the tf lite C++ support?

Thanks for your awesome work. I'm very eager to use the TFLite C++ API in Android JNI, but it seems it hasn't been finished. Do you have a timeline for when we can use it?

Evaluating Imagenet accuracy of TF Lite models

Hi,

I used the official ImageNet accuracy evaluation tool at tensorflow/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/ on the official quantized MobileNet V2. However, the reported accuracy is far below what is expected. The MobileNet V2 model was downloaded from https://tfhub.dev/tensorflow/lite-model/mobilenet_v2_1.0_224_quantized/1/default/1. The run settings below follow the instructions as closely as possible. Please help explain the accuracy gap.

Command:

bazel run -c opt \
-- \
//tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification:run_eval \
--model_file=/home/ubuntu/workspace/tensorflow/mobilenet_v2_1.0_224_quantized_1_default_1.tflite \
--ground_truth_images_path=${IMAGENET_IMAGES_DIR} \
--ground_truth_labels=${VALIDATION_LABELS} \
--model_output_labels=${MODEL_LABELS_TXT_LONG} \
--output_file_path=/tmp/accuracy_output.txt \
--num_interpreter_threads=4 \
--num_images=0

Output:

Num evaluation runs: 50000
Preprocessing latency: avg=7697.15(us), std_dev=0(us)
Inference latency: avg=77301.9(us), std_dev=6485(us)
Top-1 Accuracy: 0.35942
Top-2 Accuracy: 0.41372
Top-3 Accuracy: 0.43532
Top-4 Accuracy: 0.4475
Top-5 Accuracy: 0.4553
Top-6 Accuracy: 0.46088
Top-7 Accuracy: 0.46498
Top-8 Accuracy: 0.46842
Top-9 Accuracy: 0.47142
Top-10 Accuracy: 0.4736

NameError: name 'ImageClassifierWriter' is not defined

Environment Info

  • Python 3.7.3
  • Tensorflow 1.15 built from source
  • Pip versions:
tflite-support==0.1.0rc5
tflite-support-nightly==0.1.0.dev2020114

Script

from tflite_support.metadata_writers import object_detector

ObjectDetectorWriter = object_detector.MetadataWriter
_MODEL_PATH = "./tflite/detect.tflite"
_LABEL_FILE = "./tflite/labelmap.txt"
_SAVE_TO_PATH = "./output/detect.tflite"

with open(_MODEL_PATH, "rb") as file:
  model_buffer = file.read()

writer = ImageClassifierWriter.create_for_inference(
    model_buffer, [127.5], [127.5], [_LABEL_FILE])
new_model = writer.populate()

with open(_SAVE_TO_PATH, "wb") as file:
  file.write(new_model)

I get the following error:

Traceback (most recent call last):
  File "./3_add_metadata_for_realz.py", line 11, in <module>
    writer = ImageClassifierWriter.create_for_inference(
NameError: name 'ImageClassifierWriter' is not defined

Please be gentle, I am a noob.

Bazel 3.4.1 Build Error

Hi, I was trying to build the support library with Bazel 3.4.1 on Arch Linux. I haven't found any docs on how to build the package, so I used the following command:

$ bazel build tensorflow_lite_support/java:tensorflowlite_support_java

But I got the following errors:

Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
  Inherited 'common' options: --isatty=1 --terminal_columns=172
INFO: Reading rc options for 'build' from /home/jcyang/Projects/tensorflow-lite-support/.bazelrc:
  Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/jcyang/Projects/tensorflow-lite-support/.bazelrc:
  'build' options: --apple_platform_type=macos --enable_platform_specific_config --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --action_env ANDROID_NDK_HOME --action_env ANDROID_NDK_API_LEVEL --action_env ANDROID_BUILD_TOOLS_VERSION --action_env ANDROID_SDK_API_LEVEL --action_env ANDROID_SDK_HOME --define framework_shared_object=true --define open_source_build=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --cxxopt=-D_GLIBCXX_USE_CXX11_ABI=0 --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=short_logs --config=v2
INFO: Found applicable config definition build:short_logs in file /home/jcyang/Projects/tensorflow-lite-support/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/jcyang/Projects/tensorflow-lite-support/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:linux in file /home/jcyang/Projects/tensorflow-lite-support/.bazelrc: --copt=-w --cxxopt=-std=c++14 --host_cxxopt=-std=c++14
Internal error thrown during build. Printing stack trace: java.lang.RuntimeException: Unrecoverable error while evaluating node '@org_tensorflow//tensorflow/lite/java:tensorflowlite_java BuildConfigurationValue.Key[6a4998d59cf65df4e38174ac5f7b2f3ef6f8a16628af6d2a1e8b82438a5cd66d]' (requested by nodes '//tensorflow_lite_support/java:tensorflowlite_support_java BuildConfigurationValue.Key[6a4998d59cf65df4e38174ac5f7b2f3ef6f8a16628af6d2a1e8b82438a5cd66d]')
        at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:513)
        at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:398)
        at java.base/java.util.concurrent.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1409)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.NullPointerException
        at com.google.devtools.build.lib.rules.android.BusyBoxActionBuilder.addAapt(BusyBoxActionBuilder.java:315)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.createAapt2ApkAction(AndroidResourcesProcessorBuilder.java:279)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.build(AndroidResourcesProcessorBuilder.java:208)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.buildWithoutLocalResources(AndroidResourcesProcessorBuilder.java:179)
        at com.google.devtools.build.lib.rules.android.ResourceApk.processFromTransitiveLibraryData(ResourceApk.java:328)
        at com.google.devtools.build.lib.rules.android.AndroidLibrary.create(AndroidLibrary.java:178)
        at com.google.devtools.build.lib.rules.android.AndroidLibrary.create(AndroidLibrary.java:42)
        at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createRule(ConfiguredTargetFactory.java:350)
        at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createConfiguredTarget(ConfiguredTargetFactory.java:185)
        at com.google.devtools.build.lib.skyframe.SkyframeBuildView.createConfiguredTarget(SkyframeBuildView.java:887)
        at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.createConfiguredTarget(ConfiguredTargetFunction.java:971)
        at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.compute(ConfiguredTargetFunction.java:354)
        at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:438)
        ... 7 more

INFO: Elapsed time: 2.026s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (34 packages loaded, 520 targets configured)
Internal error thrown during build. Printing stack trace: java.lang.RuntimeException: Unrecoverable error while evaluating node '@org_tensorflow//tensorflow/lite/java:tensorflowlite_java BuildConfigurationValue.Key[6a4998d59cf65df4e38174ac5f7b2f3ef6f8a16628af6d2a1e8b82438a5cd66d]' (requested by nodes '//tensorflow_lite_support/java:tensorflowlite_support_java BuildConfigurationValue.Key[6a4998d59cf65df4e38174ac5f7b2f3ef6f8a16628af6d2a1e8b82438a5cd66d]')
        at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:513)
        at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:398)
        at java.base/java.util.concurrent.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1409)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.NullPointerException
        at com.google.devtools.build.lib.rules.android.BusyBoxActionBuilder.addAapt(BusyBoxActionBuilder.java:315)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.createAapt2ApkAction(AndroidResourcesProcessorBuilder.java:279)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.build(AndroidResourcesProcessorBuilder.java:208)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.buildWithoutLocalResources(AndroidResourcesProcessorBuilder.java:179)
        at com.google.devtools.build.lib.rules.android.ResourceApk.processFromTransitiveLibraryData(ResourceApk.java:328)
        at com.google.devtools.build.lib.rules.android.AndroidLibrary.create(AndroidLibrary.java:178)
        at com.google.devtools.build.lib.rules.android.AndroidLibrary.create(AndroidLibrary.java:42)
        at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createRule(ConfiguredTargetFactory.java:350)
        at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createConfiguredTarget(ConfiguredTargetFactory.java:185)
        at com.google.devtools.build.lib.skyframe.SkyframeBuildView.createConfiguredTarget(SkyframeBuildView.java:887)
        at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.createConfiguredTarget(ConfiguredTargetFunction.java:971)
        at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.compute(ConfiguredTargetFunction.java:354)
        at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:438)
        ... 7 more
java.lang.RuntimeException: Unrecoverable error while evaluating node '@org_tensorflow//tensorflow/lite/java:tensorflowlite_java BuildConfigurationValue.Key[6a4998d59cf65df4e38174ac5f7b2f3ef6f8a16628af6d2a1e8b82438a5cd66d]' (requested by nodes '//tensorflow_lite_support/java:tensorflowlite_support_java BuildConfigurationValue.Key[6a4998d59cf65df4e38174ac5f7b2f3ef6f8a16628af6d2a1e8b82438a5cd66d]')
        at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:513)
        at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:398)
        at java.base/java.util.concurrent.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1409)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.NullPointerException
        at com.google.devtools.build.lib.rules.android.BusyBoxActionBuilder.addAapt(BusyBoxActionBuilder.java:315)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.createAapt2ApkAction(AndroidResourcesProcessorBuilder.java:279)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.build(AndroidResourcesProcessorBuilder.java:208)
        at com.google.devtools.build.lib.rules.android.AndroidResourcesProcessorBuilder.buildWithoutLocalResources(AndroidResourcesProcessorBuilder.java:179)
        at com.google.devtools.build.lib.rules.android.ResourceApk.processFromTransitiveLibraryData(ResourceApk.java:328)
        at com.google.devtools.build.lib.rules.android.AndroidLibrary.create(AndroidLibrary.java:178)
        at com.google.devtools.build.lib.rules.android.AndroidLibrary.create(AndroidLibrary.java:42)
        at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createRule(ConfiguredTargetFactory.java:350)
        at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createConfiguredTarget(ConfiguredTargetFactory.java:185)
        at com.google.devtools.build.lib.skyframe.SkyframeBuildView.createConfiguredTarget(SkyframeBuildView.java:887)
        at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.createConfiguredTarget(ConfiguredTargetFunction.java:971)
        at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.compute(ConfiguredTargetFunction.java:354)
        at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:438)
FAILED: Build did NOT complete successfully (34 packages loaded, 520 targets configured)

Some environment variables for your reference:

ANDROID_API_LEVEL=30
ANDROID_BUILD_TOOLS_VERSION=30.0.2
ANDROID_NDK_API_LEVEL=21
ANDROID_NDK_HOME=/home/jcyang/.android-sdk/ndk/18.1.5063045
ANDROID_SDK_HOME=/home/jcyang/.android-sdk

Test: A feature request should notify TFL team.

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Tracking: Metadata extraction support for both Java and Swift

I would just like to set up an issue to track when this lib can be used to extract labels from a tflite bundle in both iOS and Android. Current support:

  • Android, Java
  • iOS, Swift

For context:
I'm using Firebase ML to manage custom models for my apps; however, managing the labels separately from the model is a pain point. I prefer to solve this uniformly in both apps to keep things simple, so I need both the Java and Swift implementations to be ready.
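
For the Android half, the Java metadata extractor can already read packed associated files, roughly as in this Kotlin sketch (assuming the model actually contains metadata; "labels.txt" is a placeholder for whatever name the label file was packed under):

import java.nio.ByteBuffer
import org.tensorflow.lite.support.metadata.MetadataExtractor

// Sketch: read a label file that was packed into the model's metadata.
fun readLabels(modelBuffer: ByteBuffer): List<String> {
    val extractor = MetadataExtractor(modelBuffer)
    return extractor.getAssociatedFile("labels.txt").bufferedReader().readLines()
}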

Converting model to tflite without optimization option

I converted saved models to tflite without any optimization option, like this:

model = tf.keras.models.load_model('saved_model')
converter = tf.lite.TFLiteConverter.from_keras_model(model)

I assumed the above conversion is done under the "default" optimization setting below:
converter.optimizations = [tf.lite.Optimize.DEFAULT]

However, I got a different result when I explicitly added the optimization option.
The converted tflite models were much smaller with the option:
Without the option: 1.02x ~ 3.84x smaller
With the option: 4.08x ~ 11.999x smaller

It seems quantization (float32 -> int8) is applied only with the option, since it made the models about 4x smaller.
Then what made the first conversion, without the option, produce smaller models?
Also, why did one of the models not get much smaller (only 1.02x)?

Thanks

System information

TensorFlow version (installed from source or binary): 2.3.0

Python version: 3.8.3

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

Upon converting a .pb file to .tflite, I get the following error:

2021-04-04 23:07:05.398922: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
I0404 23:07:14.163655 1356 saver.py:1503] Saver not created because there are no variables in the graph to restore
2021-04-04 23:07:14.182173: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2021-04-04 23:07:15.144923: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2021-04-04 23:07:15.145101: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2021-04-04 23:07:15.147434: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2021-04-04 23:07:15.158033: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2021-04-04 23:07:15.162610: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2021-04-04 23:07:15.162774: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2021-04-04 23:07:15.162945: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2021-04-04 23:07:19.332432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-04 23:07:19.332565: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2021-04-04 23:07:19.332680: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2021-04-04 23:07:19.336138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2996 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
File "C:\Users\test\anaconda3\envs\tf2\Scripts\tflite_convert-script.py", line 10, in
sys.exit(main())
File "C:\Users\test\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\lite\python\tflite_convert.py", line 515, in main
app.run(main=run_main, argv=sys.argv[:1])
File "C:\Users\test\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\test\anaconda3\envs\tf2\lib\site-packages\absl\app.py", line 300, in run
_run_main(main, args)
File "C:\Users\test\anaconda3\envs\tf2\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "C:\Users\test\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\lite\python\tflite_convert.py", line 502, in run_main
_convert_tf2_model(tflite_flags)
File "C:\Users\test\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\lite\python\tflite_convert.py", line 221, in _convert_tf2_model
tflite_model = converter.convert()
File "C:\Users\test\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\lite\python\lite.py", line 400, in convert
raise ValueError("This converter can only convert a single "
ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.

The command entered is the following:

tflite_convert --graph_def_file=tflite_graph.pb --output_file=detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess --allow_custom_ops --saved_model_dir=\object_detection\tflite\

Windows 10
Tensorflow 2.0.0
Python 3.7.9

tflite model giving extremely low FPS

I'm trying to run this tflite model and it takes about 0.5 seconds per frame for prediction, while on Android and iOS phones it gives over 20 FPS.

Refer to my issue for more details

Logged asset paths are wrong


It looks like the paths printed by the logger are wrong and do not even exist. Or am I missing something here?

TFLite variable output dimension

Hello,
In the TFLite Java inference docs, it's mentioned that there is no straightforward way to interpreter.run a model with variable-length outputs, though support for this is a planned feature. Is there a reference to the non-straightforward way that you could provide?
Would implementing the model via the C++ API be a better approach?
Many thanks

Kotlin support in future?

Hi, just curious whether support for Kotlin is planned in the future?
Ideally with some basic examples similar to those described in the docs.
As of today, what is the best practice for integrating tflite-support into a Kotlin project?
Thanks, I find the lib useful :D
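
For what it's worth, the existing Java artifacts are directly usable from Kotlin today, since Kotlin interoperates with Java APIs. A minimal sketch (the asset name "model.tflite" is a placeholder):

import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.classifier.ImageClassifier

// Sketch: the Java Task Library called from Kotlin with no extra bindings.
fun classify(context: Context, bitmap: Bitmap) {
    val classifier = ImageClassifier.createFromFile(context, "model.tflite")
    val top = classifier.classify(TensorImage.fromBitmap(bitmap))
        .first().categories.first()
    println("${top.label}: ${top.score}")
}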

Out of Memory using tflite on Android

I get an out-of-memory error condition after about 20 minutes of running the https://github.com/android/camera-samples/tree/main/CameraXTfLite example. I've heavily hacked the code to repeatedly process a single image file and isolated the problem to the TF image processing section. My memory diagnostics show there is plenty of memory available, but maybe it is becoming too fragmented? It fails on my Moto G5 and the Android Studio emulated Pixel 2 API 27 after about 1850 iterations.

Any ideas?

Zip of the whole project is available here:

https://drive.google.com/file/d/1GMvGstidySSFZTM_WBEolVXn_XcACVpw/view?usp=sharing

/*
 * Copyright 2020 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.example.android.camerax.tflite

import android.app.ActivityManager
import android.content.Context
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.os.Bundle
import android.os.Debug
import android.util.Log
import android.util.Size
import androidx.appcompat.app.AppCompatActivity
import androidx.constraintlayout.widget.ConstraintLayout
import com.android.example.camerax.tflite.R
import kotlinx.coroutines.GlobalScope
import org.tensorflow.lite.DataType
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import org.tensorflow.lite.support.common.FileUtil
import org.tensorflow.lite.support.common.ops.NormalizeOp
import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.support.image.ops.ResizeOp
import org.tensorflow.lite.support.image.ops.ResizeWithCropOrPadOp
import org.tensorflow.lite.support.image.ops.Rot90Op
import java.io.IOException


/** Activity that displays the camera and performs object detection on the incoming frames */
class CameraActivity : AppCompatActivity() {

    private lateinit var container: ConstraintLayout

    private lateinit var bitmap: Bitmap

    private val tfImageBuffer = TensorImage(DataType.UINT8)

    private val tfImageProcessor by lazy {
        val cropSize = minOf(bitmap.width, bitmap.height)
        ImageProcessor.Builder()
            .add(ResizeWithCropOrPadOp(cropSize, cropSize))
            .add(
                ResizeOp(
                    tfInputSize.height, tfInputSize.width, ResizeOp.ResizeMethod.NEAREST_NEIGHBOR
                )
            )
            .add(Rot90Op(-0 / 90))
            .add(NormalizeOp(0f, 1f))
            .build()
    }

    private val tflite by lazy {
        Interpreter(
            FileUtil.loadMappedFile(this, MODEL_PATH),
            Interpreter.Options().addDelegate(NnApiDelegate())
        )
    }

    private val detector by lazy {
        ObjectDetectionHelper(
            tflite,
            FileUtil.loadLabels(this, LABELS_PATH)
        )
    }

    private val tfInputSize by lazy {
        val inputIndex = 0
        val inputShape = tflite.getInputTensor(inputIndex).shape()
        Size(inputShape[2], inputShape[1]) // Order of axis is: {1, height, width, 3}
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_camera)
        container = findViewById(R.id.camera_container)

        // get bitmap from assets folder
        bitmap = assetsToBitmap("image20130118135124164.jpg")!!

        val oneMB = 1024 * 1024
        var memoryInfo = ActivityManager.MemoryInfo()
        for (i in 1..5000) {
            GlobalScope.run {
                val prediction = processImage()
                Log.i(TAG, " $i ${"%.2f".format(prediction?.score)} ${prediction?.label}")

                val nativeHeapSize = Debug.getNativeHeapSize()
                val nativeHeapFreeSize = Debug.getNativeHeapFreeSize()
                (getSystemService(ACTIVITY_SERVICE) as ActivityManager).getMemoryInfo(
                    memoryInfo
                )
                val miHeapSize = memoryInfo.totalMem
                val miHeapFreeSize = memoryInfo.availMem
                Log.i(TAG, "Debug native heap = ${nativeHeapSize / oneMB}, free = ${nativeHeapFreeSize / oneMB}, used = ${(nativeHeapSize - nativeHeapFreeSize) / oneMB}," +
                        " MemInfo heap = ${miHeapSize / oneMB}, free = ${miHeapFreeSize / oneMB}, used = ${(miHeapSize - miHeapFreeSize) / oneMB} MB")
            }
        }
    }

    private fun processImage()  : ObjectDetectionHelper.ObjectPrediction? {

        // Process the image in Tensorflow
        tfImageBuffer.load(this.bitmap)
        val tfImage = tfImageProcessor.process(tfImageBuffer)

        // Perform the object detection for the current frame
        val predictions = detector.predict(tfImage)

        // Report only the top prediction
        return predictions.maxByOrNull { it.score }
    }

    // extension function to get bitmap from assets
    private fun Context.assetsToBitmap(fileName: String): Bitmap? {
        return try {
            with(assets.open(fileName)) {
                BitmapFactory.decodeStream(this)
            }
        } catch (e: IOException) {
            null
        }
    }

    companion object {
        private val TAG = CameraActivity::class.java.simpleName

        private const val ACCURACY_THRESHOLD = 0.5f
        private const val MODEL_PATH = "coco_ssd_mobilenet_v1_1.0_quant.tflite"
        private const val LABELS_PATH = "coco_ssd_mobilenet_v1_1.0_labels.txt"
    }
}
I/CameraActivity:  1847 0.46 person
    Debug native heap = 18, free = 2, used = 15, MemInfo heap = 1512, free = 899, used = 613 MB
I/CameraActivity:  1848 0.46 person
    Debug native heap = 18, free = 3, used = 15, MemInfo heap = 1512, free = 900, used = 611 MB
I/CameraActivity:  1849 0.46 person
    Debug native heap = 18, free = 2, used = 15, MemInfo heap = 1512, free = 898, used = 613 MB
I/CameraActivity:  1850 0.46 person
I/CameraActivity: Debug native heap = 18, free = 3, used = 15, MemInfo heap = 1512, free = 900, used = 612 MB
I/CameraActivity:  1851 0.46 person
    Debug native heap = 18, free = 2, used = 15, MemInfo heap = 1512, free = 899, used = 612 MB
I/CameraActivity:  1852 0.46 person
I/CameraActivity: Debug native heap = 18, free = 3, used = 15, MemInfo heap = 1512, free = 900, used = 611 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1853 0.46 person
I/CameraActivity: Debug native heap = 18, free = 2, used = 15, MemInfo heap = 1512, free = 899, used = 612 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1854 0.46 person
    Debug native heap = 18, free = 3, used = 15, MemInfo heap = 1512, free = 900, used = 611 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1855 0.46 person
    Debug native heap = 18, free = 2, used = 15, MemInfo heap = 1512, free = 900, used = 611 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1856 0.46 person
    Debug native heap = 18, free = 3, used = 15, MemInfo heap = 1512, free = 901, used = 610 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1857 0.46 person
I/CameraActivity: Debug native heap = 18, free = 2, used = 15, MemInfo heap = 1512, free = 900, used = 611 MB
E/zygote: Can't map shared memory.
E/zygote: Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1858 0.46 person
I/CameraActivity: Debug native heap = 18, free = 3, used = 15, MemInfo heap = 1512, free = 902, used = 609 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1859 0.46 person
I/CameraActivity: Debug native heap = 18, free = 2, used = 15, MemInfo heap = 1512, free = 901, used = 610 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1860 0.46 person
    Debug native heap = 18, free = 3, used = 15, MemInfo heap = 1512, free = 903, used = 608 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1861 0.46 person
I/CameraActivity: Debug native heap = 18, free = 2, used = 15, MemInfo heap = 1512, free = 902, used = 610 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
I/CameraActivity:  1862 0.46 person
    Debug native heap = 18, free = 3, used = 15, MemInfo heap = 1512, free = 903, used = 608 MB
E/zygote: Can't map shared memory.
    Could not map pool
    Can't map shared memory.
    Could not map pool
W/libc: pthread_create failed: couldn't allocate 1036288-bytes mapped space: Out of memory
E/libc++abi: terminating with uncaught exception of type std::__1::system_error: thread constructor failed: Try again
A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 17682 (.camerax.tflite), pid 6488 (.camerax.tflite)
W/libc: pthread_create failed: couldn't allocate 1036288-bytes mapped space: Out of memory

Difference between CPU and GPU output for same input

Hello

First of all thank you for making such a wonderful library. This is very helpful.

We were trying to use this library for a model and noticed that there is a definite difference between the GPU and CPU output for a tflite model (for the same input TensorBuffer). Is there a particular reason for this? We used the following Kotlin code to create the options for the model.

val compatibilityList = CompatibilityList();
val options = if(compatibilityList.isDelegateSupportedOnThisDevice){
    Log.d("Output", "This Device is GPU compatible")
    Model.Options.Builder().setDevice(Model.Device.GPU).build();

} else {
    Log.d("Output", "This Device is not GPU compatible")
    Model.Options.Builder().setNumThreads(4).build();
}

BTW, we tried the same model with the TFLite Interpreter Java library, and the output from GPU and CPU was the same. We'd appreciate any input on this. Thanks a lot for your help.

TFLite runtime with select ops support in windows

Hi,

I'm using a tflite model on Android which runs without problems when I use "org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly", because the model contains some TensorFlow operations (not completely standard TFLite operations). On Windows, on the other hand, to use the same model I have to install TensorFlow 2.4, as stated in this link:

"Note: TensorFlow Lite with select TensorFlow ops are available in the TensorFlow pip package version since 2.3 for Linux and 2.4 for other environments."

My question is: is there any way to use a tflite model on Windows from Python, with select ops support, without installing the complete TensorFlow 2.4 package?

Channel-specific NormalizeOp support [feature request?]

Some models pre-process channels by different amounts, based on the average pixel values of the separate channels in the training dataset. When using new data, we have to preprocess the input the same way: for example, preprocessing for VGGFace subtracts different values from the red, green, and blue channels (some code here).

I don't think this is supported by tf.image (because it generates pixel values which are not meaningful as images, only as arrays?).
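
For reference, recent versions of the support library appear to include a per-channel NormalizeOp overload that takes float arrays. A hedged Kotlin sketch (the mean values below are illustrative placeholders, not the real VGGFace constants):

import org.tensorflow.lite.support.common.ops.NormalizeOp
import org.tensorflow.lite.support.image.ImageProcessor

// Sketch: per-channel normalization, output[c] = (input[c] - mean[c]) / stddev[c].
// The R/G/B means below are placeholders; substitute your dataset's values.
val perChannelProcessor = ImageProcessor.Builder()
    .add(NormalizeOp(floatArrayOf(91.5f, 103.9f, 131.1f), floatArrayOf(1f, 1f, 1f)))
    .build()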

Unrelated:
I was unsure what is meant here

Note: The returned {@link TensorBuffer} is always a {@link DataType#FLOAT32} tensor at present, except that the input is a {@link DataType#UINT8} tensor, {@code mean} is set to 0 and {@code stddev} is set to 1.

What does it mean: "except that the input is a UINT8 tensor"?
Do you mean "expect that the input is a UINT8 Tensor"?

Fitting tflite models

Hello,

Is there any way to use something like the Keras "fit()" function using tflite? Currently, I am sending the data back to a server that is running the fit function, converting the model back to tflite, and sending it back to the Raspberry Pi. Is there a more efficient way to do this?

JNI DETECTED ERROR IN APPLICATION

When following the example on the task library page, I get the following error:

2020-09-24 11:56:35.734 19421-20198/com.native_ai E/libc: Access denied finding property "vendor.camera.aux.packagelist"
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] JNI DETECTED ERROR IN APPLICATION: JNI NewStringUTF called with pending exception java.lang.NoSuchMethodError: no static method "Lorg/tensorflow/lite/support/label/Category;.create(Ljava/lang/String;Ljava/lang/String;F)Lorg/tensorflow/lite/support/label/Category;"
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at java.util.List org.tensorflow.lite.task.vision.detector.ObjectDetector.detectNative(long, java.nio.ByteBuffer, int, int, int) (ObjectDetector.java:-2)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at java.util.List org.tensorflow.lite.task.vision.detector.ObjectDetector.detect(org.tensorflow.lite.support.image.TensorImage, org.tensorflow.lite.task.core.vision.ImageProcessingOptions) (ObjectDetector.java:312)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at java.util.List org.tensorflow.lite.task.vision.detector.ObjectDetector.detect(org.tensorflow.lite.support.image.TensorImage) (ObjectDetector.java:292)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void com.models.SSD.runModel() (SSD.java:54)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void com.models.Model.supplyFrame(com.utils.ExtendedBitmap) (Model.java:91)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void com.models.ModelLoadBalancer.supplyFrame(com.utils.ExtendedBitmap) (ModelLoadBalancer.java:39)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void com.models.ModelManager.supplyFrame(com.utils.ExtendedBitmap) (ModelManager.java:137)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void com.models.-$$Lambda$yoikLCFTO8RafhdJGjpQ4SUV75s.accept(java.lang.Object) (lambda:-1)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void com.inputstream.Camera2Analyzer.onProcessImage(int[]) (Camera2Analyzer.java:120)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void com.exercises.base.exercise.ExerciseActivity.processImage() (ExerciseActivity.java:113)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void com.camera.CameraActivity.onImageAvailable(android.media.ImageReader) (CameraActivity.java:258)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void android.media.ImageReader$ListenerHandler.handleMessage(android.os.Message) (ImageReader.java:798)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void android.os.Handler.dispatchMessage(android.os.Message) (Handler.java:107)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void android.os.Looper.loop() (Looper.java:214)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] at void android.os.HandlerThread.run() (HandlerThread.java:67)
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570]
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] in call to NewStringUTF
2020-09-24 11:56:36.488 19421-24788/com.native_ai A/com.native_ai: java_vm_ext.cc:570] from java.util.List org.tensorflow.lite.task.vision.detector.ObjectDetector.detectNative(long, java.nio.ByteBuffer, int, int, int)
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] Runtime aborting...
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] Dumping all threads without mutator lock held
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] All threads:
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] DALVIK THREADS (36):
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "ImageListener" prio=5 tid=31 Runnable
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=0 dsCount=0 flags=0 obj=0x13300000 self=0x77a1b70400
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=24788 nice=0 cgrp=default sched=0/0 handle=0x774aaf2d50
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=R schedstat=( 134176685 52922496 223 ) utm=10 stm=2 core=7 HZ=100
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x774a9f0000-0x774a9f2000 stackSize=1039KB
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes= "abort lock" "mutator lock"(shared held)
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 000000000041186c /apex/com.android.runtime/lib64/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits>&, int, BacktraceMap*, char const*, art::ArtMethod*, void*, bool)+140)
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 00000000004f9150 /apex/com.android.runtime/lib64/libart.so (art::Thread::DumpStack(std::__1::basic_ostream<char, std::__1::char_traits>&, bool, BacktraceMap*, bool) const+512)
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 0000000000513b80 /apex/com.android.runtime/lib64/libart.so (art::DumpCheckpoint::Run(art::Thread*)+828)
2020-09-24 11:56:36.670 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 000000000050c948 /apex/com.android.runtime/lib64/libart.so (art::ThreadList::RunCheckpoint(art::Closure*, art::Closure*)+456)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #4 pc 000000000050be30 /apex/com.android.runtime/lib64/libart.so (art::ThreadList::Dump(std::__1::basic_ostream<char, std::__1::char_traits>&, bool)+1964)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #5 pc 00000000004b911c /apex/com.android.runtime/lib64/libart.so (art::Runtime::Abort(char const*)+1452)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #6 pc 000000000000b69c /system/lib64/libbase.so (android::base::LogMessage::~LogMessage()+580)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #7 pc 000000000037828c /apex/com.android.runtime/lib64/libart.so (art::JavaVMExt::JniAbort(char const*, char const*)+1584)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #8 pc 00000000003784b0 /apex/com.android.runtime/lib64/libart.so (art::JavaVMExt::JniAbortV(char const*, char const*, std::__va_list)+108)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #9 pc 000000000036a8d0 /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::ScopedCheck::AbortF(char const*, ...)+136)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #10 pc 00000000003693a4 /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::ScopedCheck::CheckPossibleHeapValue(art::ScopedObjectAccess&, char, art::(anonymous namespace)::JniValueType)+1144)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #11 pc 000000000036878c /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::ScopedCheck::Check(art::ScopedObjectAccess&, bool, char const*, art::(anonymous namespace)::JniValueType*)+652)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #12 pc 000000000035e54c /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::CheckJNI::NewStringUTF(_JNIEnv*, char const*)+672)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #13 pc 000000000021e674 /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/lib/arm64/libtask_vision_jni.so (???)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #14 pc 0000000000034a84 /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/lib/arm64/libtask_vision_jni.so (Java_org_tensorflow_lite_task_vision_detector_ObjectDetector_detectNative+564)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #15 pc 0000000000140350 /apex/com.android.runtime/lib64/libart.so (art_quick_generic_jni_trampoline+144)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #16 pc 00000000001375b8 /apex/com.android.runtime/lib64/libart.so (art_quick_invoke_static_stub+568)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #17 pc 000000000014600c /apex/com.android.runtime/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+276)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #18 pc 00000000002e3978 /apex/com.android.runtime/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+384)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #19 pc 00000000002dfc88 /apex/com.android.runtime/lib64/libart.so (bool art::interpreter::DoCall<true, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+692)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #20 pc 00000000005a6800 /apex/com.android.runtime/lib64/libart.so (MterpInvokeStaticRange+236)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #21 pc 0000000000131c94 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_static_range+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #22 pc 0000000000192d6a [anon:dalvik-classes3.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes3.dex] (org.tensorflow.lite.task.vision.detector.ObjectDetector.detect+90)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #23 pc 00000000005a1178 /apex/com.android.runtime/lib64/libart.so (MterpInvokeVirtual+1352)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #24 pc 0000000000131814 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_virtual+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #25 pc 0000000000192cf4 [anon:dalvik-classes3.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes3.dex] (org.tensorflow.lite.task.vision.detector.ObjectDetector.detect+16)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #26 pc 00000000005a1178 /apex/com.android.runtime/lib64/libart.so (MterpInvokeVirtual+1352)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #27 pc 0000000000131814 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_virtual+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #28 pc 0000000000059ace [anon:dalvik-classes2.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes2.dex] (com.models.SSD.runModel+122)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #29 pc 00000000005a1178 /apex/com.android.runtime/lib64/libart.so (MterpInvokeVirtual+1352)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #30 pc 0000000000131814 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_virtual+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #31 pc 00000000000573bc [anon:dalvik-classes2.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes2.dex] (com.models.Model.supplyFrame+48)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #32 pc 00000000005a1178 /apex/com.android.runtime/lib64/libart.so (MterpInvokeVirtual+1352)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #33 pc 0000000000131814 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_virtual+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #34 pc 0000000000056b04 [anon:dalvik-classes2.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes2.dex] (com.models.ModelLoadBalancer.supplyFrame+428)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #35 pc 00000000005a1178 /apex/com.android.runtime/lib64/libart.so (MterpInvokeVirtual+1352)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #36 pc 0000000000131814 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_virtual+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #37 pc 000000000005712c [anon:dalvik-classes2.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes2.dex] (com.models.ModelManager.supplyFrame+16)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #38 pc 00000000005a1178 /apex/com.android.runtime/lib64/libart.so (MterpInvokeVirtual+1352)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #39 pc 0000000000131814 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_virtual+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #40 pc 00000000000568e4 [anon:dalvik-classes2.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes2.dex] (com.models.-$$Lambda$yoikLCFTO8RafhdJGjpQ4SUV75s.accept+8)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #41 pc 00000000005a2998 /apex/com.android.runtime/lib64/libart.so (MterpInvokeInterface+1788)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #42 pc 0000000000131a14 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_interface+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #43 pc 000000000005389e [anon:dalvik-classes2.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes2.dex] (com.inputstream.Camera2Analyzer.onProcessImage+170)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #44 pc 00000000005a2998 /apex/com.android.runtime/lib64/libart.so (MterpInvokeInterface+1788)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #45 pc 0000000000131a14 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_interface+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #46 pc 000000000004b4b4 [anon:dalvik-classes2.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes2.dex] (com.exercises.base.exercise.ExerciseActivity.processImage+52)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #47 pc 00000000005a1178 /apex/com.android.runtime/lib64/libart.so (MterpInvokeVirtual+1352)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #48 pc 0000000000131814 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_virtual+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #49 pc 00000000000452f4 [anon:dalvik-classes2.dex extracted in memory from /data/app/com.native_ai-oedDdix8WB1PkyjCEJYOtA==/base.apk!classes2.dex] (com.camera.CameraActivity.onImageAvailable+256)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #50 pc 00000000005a2998 /apex/com.android.runtime/lib64/libart.so (MterpInvokeInterface+1788)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #51 pc 0000000000131a14 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_interface+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #52 pc 00000000001ccb88 /system/framework/framework.jar (android.media.ImageReader$ListenerHandler.handleMessage+72)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #53 pc 00000000002b4c8c /apex/com.android.runtime/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEbb.llvm.1212182684075602316+240)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #54 pc 0000000000592420 /apex/com.android.runtime/lib64/libart.so (artQuickToInterpreterBridge+1032)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #55 pc 0000000000140468 /apex/com.android.runtime/lib64/libart.so (art_quick_to_interpreter_bridge+88)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #56 pc 000000000201a0e8 /memfd:/jit-cache (deleted) (android.os.Handler.dispatchMessage+168)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #57 pc 000000000201df2c /memfd:/jit-cache (deleted) (android.os.Looper.loop+1372)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #58 pc 00000000001375b8 /apex/com.android.runtime/lib64/libart.so (art_quick_invoke_static_stub+568)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #59 pc 000000000014600c /apex/com.android.runtime/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+276)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #60 pc 00000000002e3978 /apex/com.android.runtime/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+384)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #61 pc 00000000002debd8 /apex/com.android.runtime/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+892)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #62 pc 00000000005a398c /apex/com.android.runtime/lib64/libart.so (MterpInvokeStatic+372)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #63 pc 0000000000131994 /apex/com.android.runtime/lib64/libart.so (mterp_op_invoke_static+20)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #64 pc 000000000030b460 /system/framework/framework.jar (android.os.HandlerThread.run+56)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #65 pc 00000000002b4c8c /apex/com.android.runtime/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEbb.llvm.1212182684075602316+240)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #66 pc 0000000000592420 /apex/com.android.runtime/lib64/libart.so (artQuickToInterpreterBridge+1032)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #67 pc 0000000000140468 /apex/com.android.runtime/lib64/libart.so (art_quick_to_interpreter_bridge+88)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #68 pc 0000000000137334 /apex/com.android.runtime/lib64/libart.so (art_quick_invoke_stub+548)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #69 pc 0000000000145fec /apex/com.android.runtime/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+244)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #70 pc 00000000004b0d14 /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::InvokeWithArgArray(art::ScopedObjectAccessAlreadyRunnable const&, art::ArtMethod*, art::(anonymous namespace)::ArgArray*, art::JValue*, char const*)+104)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #71 pc 00000000004b1e28 /apex/com.android.runtime/lib64/libart.so (art::InvokeVirtualOrInterfaceWithJValues(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jmethodID*, jvalue const*)+416)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #72 pc 00000000004f27f4 /apex/com.android.runtime/lib64/libart.so (art::Thread::CreateCallback(void*)+1176)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #73 pc 00000000000e68a0 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+36)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #74 pc 0000000000084b6c /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at org.tensorflow.lite.task.vision.detector.ObjectDetector.detectNative(Native method)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at org.tensorflow.lite.task.vision.detector.ObjectDetector.detect(ObjectDetector.java:312)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at org.tensorflow.lite.task.vision.detector.ObjectDetector.detect(ObjectDetector.java:292)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.models.SSD.runModel(SSD.java:54)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.models.Model.supplyFrame(Model.java:91)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.models.ModelLoadBalancer.supplyFrame(ModelLoadBalancer.java:39)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.models.ModelManager.supplyFrame(ModelManager.java:137)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.models.-$$Lambda$yoikLCFTO8RafhdJGjpQ4SUV75s.accept(lambda:-1)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.inputstream.Camera2Analyzer.onProcessImage(Camera2Analyzer.java:120)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.exercises.base.exercise.ExerciseActivity.processImage(ExerciseActivity.java:113)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.camera.CameraActivity.onImageAvailable(CameraActivity.java:258)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at android.media.ImageReader$ListenerHandler.handleMessage(ImageReader.java:798)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at android.os.Handler.dispatchMessage(Handler.java:107)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at android.os.Looper.loop(Looper.java:214)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at android.os.HandlerThread.run(HandlerThread.java:67)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "main" prio=10 tid=1 Native
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x716e0b78 self=0x78418cac00
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=19421 nice=-10 cgrp=default sched=0/0 handle=0x7842e35ed0
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 10973764638 1450913420 18723 ) utm=657 stm=440 core=6 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x7ffab3f000-0x7ffab41000 stackSize=8192KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/19421/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 000000000008033c /apex/com.android.runtime/lib64/bionic/libc.so (syscall+28)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000014c214 /apex/com.android.runtime/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+164)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 000000000036c5e8 /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::CheckJNI::CallMethodV(char const*, _JNIEnv*, _jobject*, _jclass*, _jmethodID*, std::__va_list, art::Primitive::Type, art::InvokeType)+484)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 000000000035a3c4 /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::CheckJNI::CallObjectMethodV(_JNIEnv*, _jobject*, _jmethodID*, std::__va_list)+72)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #4 pc 0000000000003fcc /apex/com.android.runtime/lib64/libnativehelper.so (_JNIEnv::CallObjectMethod(_jobject*, _jmethodID*, ...)+116)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #5 pc 00000000001a7094 /system/lib64/libandroid_runtime.so ((anonymous namespace)::Receiver::handleEvent(int, int, void*)+92)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #6 pc 000000000001836c /system/lib64/libutils.so (android::Looper::pollInner(int)+832)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #7 pc 0000000000017f8c /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+56)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #8 pc 000000000013d278 /system/lib64/libandroid_runtime.so (android::android_os_MessageQueue_nativePollOnce(_JNIEnv*, _jobject*, long, int)+44)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at android.os.MessageQueue.nativePollOnce(Native method)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at android.os.MessageQueue.next(MessageQueue.java:336)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at android.os.Looper.loop(Looper.java:174)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at android.app.ActivityThread.main(ActivityThread.java:7711)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.reflect.Method.invoke(Native method)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:516)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:950)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "Jit thread pool worker thread 0" prio=1 tid=6 Native
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc02c8 self=0x77af85b000
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20187 nice=19 cgrp=default sched=0/0 handle=0x77b0c53d40
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 737220669 352673083 1150 ) utm=58 stm=15 core=5 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x77b0b55000-0x77b0b57000 stackSize=1023KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/20187/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 000000000008033c /apex/com.android.runtime/lib64/bionic/libc.so (syscall+28)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000014c214 /apex/com.android.runtime/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+164)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 00000000005155e8 /apex/com.android.runtime/lib64/libart.so (art::ThreadPool::GetTask(art::Thread*)+256)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 000000000051496c /apex/com.android.runtime/lib64/libart.so (art::ThreadPoolWorker::Run()+144)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #4 pc 000000000051442c /apex/com.android.runtime/lib64/libart.so (art::ThreadPoolWorker::Callback(void*)+148)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #5 pc 00000000000e68a0 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+36)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #6 pc 0000000000084b6c /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] (no managed stack frames)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "Signal Catcher" prio=1 tid=7 WaitingInMainSignalCatcherLoop
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc0340 self=0x77aad4d800
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20192 nice=19 cgrp=default sched=0/0 handle=0x77b0b3dd50
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 416564 1488489 9 ) utm=0 stm=0 core=4 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x77b0a47000-0x77b0a49000 stackSize=991KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/20192/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 00000000000d11d8 /apex/com.android.runtime/lib64/bionic/libc.so (__rt_sigtimedwait+8)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000008fc5c /apex/com.android.runtime/lib64/bionic/libc.so (sigwait+128)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 00000000004daff8 /apex/com.android.runtime/lib64/libart.so (art::SignalCatcher::WaitForSignal(art::Thread*, art::SignalSet&)+392)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 00000000004d9d78 /apex/com.android.runtime/lib64/libart.so (art::SignalCatcher::Run(void*)+268)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #4 pc 00000000000e68a0 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+36)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #5 pc 0000000000084b6c /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] (no managed stack frames)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "ADB-JDWP Connection Control Thread" prio=1 tid=8 WaitingInMainDebuggerLoop
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc03b8 self=0x77af86c800
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20193 nice=19 cgrp=default sched=0/0 handle=0x77b0a11d50
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 4622402 21472653 26 ) utm=0 stm=0 core=7 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x77b091b000-0x77b091d000 stackSize=991KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/20193/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 00000000000d10d8 /apex/com.android.runtime/lib64/bionic/libc.so (__ppoll+8)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000008d6c4 /apex/com.android.runtime/lib64/bionic/libc.so (poll+88)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 0000000000008e24 /apex/com.android.runtime/lib64/libadbconnection.so (adbconnection::AdbConnectionState::RunPollLoop(art::Thread*)+824)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 000000000000721c /apex/com.android.runtime/lib64/libadbconnection.so (adbconnection::CallbackFunction(void*)+1076)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #4 pc 00000000000e68a0 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+36)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #5 pc 0000000000084b6c /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] (no managed stack frames)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "ReferenceQueueDaemon" prio=5 tid=9 Waiting
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc0430 self=0x77aac88c00
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20195 nice=4 cgrp=default sched=0/0 handle=0x77b07fed50
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 3657447 3223646 23 ) utm=0 stm=0 core=6 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x77b06fc000-0x77b06fe000 stackSize=1039KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/20195/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 000000000008033c /apex/com.android.runtime/lib64/bionic/libc.so (syscall+28)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000014c214 /apex/com.android.runtime/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+164)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 000000000040cbb4 /apex/com.android.runtime/lib64/libart.so (art::Monitor::Wait(art::Thread*, long, int, bool, art::ThreadState)+620)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 000000000040e7e0 /apex/com.android.runtime/lib64/libart.so (art::Monitor::Wait(art::Thread*, art::ObjPtr<art::mirror::Object>, long, int, bool, art::ThreadState)+284)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Object.wait(Native method)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] - waiting on <0x02dce885> (a java.lang.Class<java.lang.ref.ReferenceQueue>)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Object.wait(Object.java:442)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Object.wait(Object.java:568)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$ReferenceQueueDaemon.runInternal(Daemons.java:220)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] - locked <0x02dce885> (a java.lang.Class<java.lang.ref.ReferenceQueue>)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$Daemon.run(Daemons.java:142)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Thread.run(Thread.java:919)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "HeapTaskDaemon" prio=5 tid=10 WaitingForTaskProcessor
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc9508 self=0x7841a53400
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20194 nice=4 cgrp=default sched=0/0 handle=0x77b0912d50
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 208584374 42755105 212 ) utm=18 stm=2 core=6 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x77b0810000-0x77b0812000 stackSize=1039KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/20194/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 0000000000080340 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+32)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000014c64c /apex/com.android.runtime/lib64/libart.so (art::ConditionVariable::TimedWait(art::Thread*, long, int)+168)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 0000000000290380 /apex/com.android.runtime/lib64/libart.so (art::gc::TaskProcessor::GetTask(art::Thread*)+508)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 0000000000290bcc /apex/com.android.runtime/lib64/libart.so (art::gc::TaskProcessor::RunAllTasks(art::Thread*)+92)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at dalvik.system.VMRuntime.runHeapTasks(Native method)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$HeapTaskDaemon.runInternal(Daemons.java:552)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$Daemon.run(Daemons.java:142)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Thread.run(Thread.java:919)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "FinalizerDaemon" prio=5 tid=11 Waiting
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc04a8 self=0x77aac8a800
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20196 nice=4 cgrp=default sched=0/0 handle=0x77b06dfd50
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 5692343 1991668 18 ) utm=0 stm=0 core=6 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x77b05dd000-0x77b05df000 stackSize=1039KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/20196/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 000000000008033c /apex/com.android.runtime/lib64/bionic/libc.so (syscall+28)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000014c214 /apex/com.android.runtime/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+164)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 000000000040cbb4 /apex/com.android.runtime/lib64/libart.so (art::Monitor::Wait(art::Thread*, long, int, bool, art::ThreadState)+620)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 000000000040e7e0 /apex/com.android.runtime/lib64/libart.so (art::Monitor::Wait(art::Thread*, art::ObjPtr<art::mirror::Object>, long, int, bool, art::ThreadState)+284)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Object.wait(Native method)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] - waiting on <0x02c5b6da> (a java.lang.Object)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Object.wait(Object.java:442)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:190)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] - locked <0x02c5b6da> (a java.lang.Object)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:211)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$FinalizerDaemon.runInternal(Daemons.java:276)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$Daemon.run(Daemons.java:142)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Thread.run(Thread.java:919)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "FinalizerWatchdogDaemon" prio=5 tid=12 Sleeping
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc0520 self=0x77aac8c400
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20197 nice=4 cgrp=default sched=0/0 handle=0x77b05d4d50
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 1257399 2435728 13 ) utm=0 stm=0 core=1 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x77b04d2000-0x77b04d4000 stackSize=1039KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/20197/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 0000000000080340 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+32)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000014c64c /apex/com.android.runtime/lib64/libart.so (art::ConditionVariable::TimedWait(art::Thread*, long, int)+168)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 000000000040cbc8 /apex/com.android.runtime/lib64/libart.so (art::Monitor::Wait(art::Thread*, long, int, bool, art::ThreadState)+640)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 000000000040e7e0 /apex/com.android.runtime/lib64/libart.so (art::Monitor::Wait(art::Thread*, art::ObjPtr<art::mirror::Object>, long, int, bool, art::ThreadState)+284)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Thread.sleep(Native method)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] - sleeping on <0x0281770b> (a java.lang.Object)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Thread.sleep(Thread.java:440)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] - locked <0x0281770b> (a java.lang.Object)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Thread.sleep(Thread.java:356)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$FinalizerWatchdogDaemon.sleepForMillis(Daemons.java:393)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$FinalizerWatchdogDaemon.waitForFinalization(Daemons.java:440)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$FinalizerWatchdogDaemon.runInternal(Daemons.java:328)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Daemons$Daemon.run(Daemons.java:142)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] at java.lang.Thread.run(Thread.java:919)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "Binder:19421_1" prio=5 tid=13 Native
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc0598 self=0x77aad74000
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20198 nice=0 cgrp=default sched=0/0 handle=0x77b02bbd50
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | state=S schedstat=( 29031458 28995471 106 ) utm=1 stm=1 core=7 HZ=100
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | stack=0x77b01c5000-0x77b01c7000 stackSize=991KB
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | held mutexes=
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] kernel: (couldn't read /proc/self/task/20198/stack)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #00 pc 000000000008033c /apex/com.android.runtime/lib64/bionic/libc.so (syscall+28)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #1 pc 000000000014c214 /apex/com.android.runtime/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+164)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #2 pc 000000000036c5e8 /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::CheckJNI::CallMethodV(char const*, _JNIEnv*, _jobject*, _jclass*, _jmethodID*, std::__va_list, art::Primitive::Type, art::InvokeType)+484)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #3 pc 000000000035cf68 /apex/com.android.runtime/lib64/libart.so (art::(anonymous namespace)::CheckJNI::CallStaticVoidMethodV(_JNIEnv*, _jclass*, _jmethodID*, std::__va_list)+76)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #4 pc 00000000000c099c /system/lib64/libandroid_runtime.so (_JNIEnv::CallStaticVoidMethod(_jclass*, _jmethodID*, ...)+116)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #5 pc 0000000000170480 /system/lib64/libandroid_runtime.so (android::JNISurfaceTextureContext::onFrameAvailable(android::BufferItem const&)+64)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #6 pc 000000000009c3d4 /system/lib64/libgui.so (android::ConsumerBase::onFrameAvailable(android::BufferItem const&)+172)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #7 pc 0000000000068a28 /system/lib64/libgui.so (android::BufferQueue::ProxyConsumerListener::onFrameAvailable(android::BufferItem const&)+104)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #8 pc 0000000000072e88 /system/lib64/libgui.so (android::BufferQueueProducer::queueBuffer(int, android::IGraphicBufferProducer::QueueBufferInput const&, android::IGraphicBufferProducer::QueueBufferOutput*)+1904)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #9 pc 000000000007fad0 /system/lib64/libgui.so (android::BnGraphicBufferProducer::onTransact(unsigned int, android::Parcel const&, android::Parcel*, unsigned int)+1740)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #10 pc 000000000004c678 /system/lib64/libbinder.so (android::BBinder::transact(unsigned int, android::Parcel const&, android::Parcel*, unsigned int)+136)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #11 pc 0000000000059798 /system/lib64/libbinder.so (android::IPCThreadState::executeCommand(int)+992)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #12 pc 0000000000059304 /system/lib64/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+156)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #13 pc 0000000000059a58 /system/lib64/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+64)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #14 pc 000000000007fd3c /system/lib64/libbinder.so (android::PoolThread::threadLoop()+24)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #15 pc 00000000000135f0 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+328)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #16 pc 00000000000c3c5c /system/lib64/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+140)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #17 pc 00000000000e68a0 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+36)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] native: #18 pc 0000000000084b6c /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] (no managed stack frames)
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630]
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] "Binder:19421_2" prio=5 tid=14 Native
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | group="" sCount=1 dsCount=0 flags=1 obj=0x12dc0610 self=0x77af87e000
2020-09-24 11:56:36.671 19421-24788/com.native_ai A/com.native_ai: runtime.cc:630] | sysTid=20199 nice=0 cgrp=default sched=0/0 handle=0x77b01b3d50
[The rest of the Binder:19421_2 dump and the remaining thread dumps ("HybridData DestructorThread", OkHttp RealConnectionPool, ThreadPoolExecutor workers, and the crashing ImageListener thread repeating the ObjectDetector.detect stack shown above) were captured with their lines interleaved; they are collapsed here.]
2020-09-24 11:56:36.673 19421-24788/com.native_ai A/com.native_ai: runtime.cc:638] JNI DETECTED ERROR IN APPLICATION: JNI NewStringUTF called with pending exception java.lang.NoSuchMethodError: no static method "Lorg/tensorflow/lite/support/label/Category;.create(Ljava/lang/String;Ljava/lang/String;F)Lorg/tensorflow/lite/support/label/Category;"
2020-09-24 11:56:36.673 19421-24788/com.native_ai A/com.native_ai: runtime.cc:638] at java.util.List org.tensorflow.lite.task.vision.detector.ObjectDetector.detectNative(long, java.nio.ByteBuffer, int, int, int) (ObjectDetector.java:-2)
2020-09-24 11:56:36.674 19421-24788/com.native_ai A/libc: Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 24788 (ImageListener), pid 19421 (com.native_ai)

Any ideas on how to solve this?

library error

fatal error: tensorflow_lite_support/cc/port/statusor.h: No such file or directory

I am not able to capture the image in the fragment

Hi, I have integrated the TensorFlow SDK in Android, and I want to capture the image from the camera with the detector. How do I achieve this?
I have written the code in a camera Fragment, but it does not work.
Kindly give some suggestions.

RESHAPE failed to prepare when invoking tf.signal.stft in tflite

I am building a Flutter app that needs to record audio and predict a label using a TFLite model I built. To link the audio recording and TFLite, I use the Flutter plugin tflite audio (https://github.com/Caldarie/flutter_tflite_audio).

The TensorFlow model works on Colab, but when I launch the app and inference happens, i.e. when it calls interpreter.invoke(), the following error occurs:

TensorFlow Lite Error: tensorflow/lite/kernels/reshape.cc:58 stretch_dim != -1 (0 != -1)
TensorFlow Lite Error: Node number 26 (RESHAPE) failed to prepare.
Failed to invoke the interpreter with error: Must call allocateTensors().
Fatal error: Unexpectedly found nil while implicitly unwrapping an Optional value: file tflite_audio/SwiftTfliteAudioPlugin.swift, line 290
* thread #2, queue = 'conversionQueue', stop reason = Fatal error: Unexpectedly found nil while implicitly unwrapping an Optional value
    frame #0: 0x00000001a672ee08 libswiftCore.dylib`_swift_runtime_on_report
libswiftCore.dylib`_swift_runtime_on_report:
->  0x1a672ee08 <+0>: ret
libswiftCore.dylib`_swift_reportToDebugger:
    0x1a672ee0c <+0>: b      0x1a672ee08               ; _swift_runtime_on_report
libswiftCore.dylib`_swift_shouldReportFatalErrorsToDebugger:
    0x1a672ee10 <+0>: adrp   x8, 341475
    0x1a672ee14 <+4>: ldrb   w0, [x8, #0x7c8]
Target 0: (Runner) stopped.
Lost connection to device.

Here is the tflite model I use.
my_trivial_stft_model_1_input.tflite.zip

On Netron, here is the problematic node:
[screenshot: the problematic node in Netron]

It looks like it is only squeezing the first dimension, so maybe it cannot do so because, as you can see in the following summary of my model, the first dimension is None. I tried some tricks to avoid having this None, but I am not familiar enough with TensorFlow to be sure about the validity of the operations I am doing.

[screenshot: model summary showing the first dimension as None]

I have boiled my model down to the minimal size, and this node sits between the following two lines of code, so I suspect the stft function is doing this reshaping, but I have no idea why.

spectrograms = tf.signal.stft(waveforms,
                              frame_length=self.fft_size,
                              frame_step=self.hop_size,
                              pad_end=False)

magnitude_spectrograms = tf.abs(spectrograms)

Can anyone help on this issue?
Thanks!
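One direction that may be worth trying, given the None leading dimension visible in the model summary: pin the batch size at conversion time so every RESHAPE gets a fully concrete target shape. The sketch below is only illustrative; the name model (for the Keras model summarized above) and the 16000-sample input length are assumptions, not details taken from this report.

import tensorflow as tf

# Assumption: `model` is the Keras model above and takes a fixed-length
# mono waveform. Tracing a concrete function with a static TensorSpec
# pins the batch dimension to 1 instead of None.
concrete_fn = tf.function(model).get_concrete_function(
    tf.TensorSpec(shape=[1, 16000], dtype=tf.float32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn])
tflite_model = converter.convert()

with open("stft_model_static_batch.tflite", "wb") as f:  # filename assumed
    f.write(tflite_model)

The stretch_dim != -1 (0 != -1) message indicates that the RESHAPE target shape contains two -1 (unknown) entries, the first of which is the None batch dimension; with every dimension static, the converter can usually fold the squeeze/reshape that tf.signal.stft emits into fully known shapes.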

Bazel run failed on nl_classifier_demo

Device Details:

OS: Ubuntu 16.04.7 LTS
Bazel: 3.1.0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0
G++: g++ (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0
Python3: 3.5.2
Python: 2.7.12

Command
bazel-3.1.0 run -c opt tensorflow_lite_support/examples/task/text/desktop:nl_classifier_demo -- --model_path=/tmp/movie_review.tflite --text="What a waste of my time." --input_tensor_name="input_text" --output_score_tensor_name="probability" --sandbox_debug --verbose_failures

I am trying to run the TFLite Support Task Library on my laptop. I tried to run NLClassifier by referring to this documentation, but I got the following error.

ERROR: /home/sasuke/tflite-support/tensorflow_lite_support/cc/task/text/nlclassifier/BUILD:12:1: C++ compilation of rule '//tensorflow_lite_support/cc/task/text/nlclassifier:nl_classifier' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 135 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox
tensorflow_lite_support/cc/task/text/nlclassifier/nl_classifier.cc: In function 'tflite::support::StatusOr<std::unique_ptr<tflite::support::text::tokenizer::Tokenizer> > tflite::task::text::nlclassifier::{anonymous}::CreateRegexTokenizerFromProcessUnit(const tflite::ProcessUnit*, const tflite::metadata::ModelMetadataExtractor*)':
tensorflow_lite_support/cc/task/text/nlclassifier/nl_classifier.cc:136:10: error: could not convert 'regex_tokenizer' from 'std::unique_ptr<tflite::support::text::tokenizer::RegexTokenizer>' to 'tflite::support::StatusOr<std::unique_ptr<tflite::support::text::tokenizer::Tokenizer> >'
   return regex_tokenizer;
          ^~~~~~~~~~~~~~~
Target //tensorflow_lite_support/examples/task/text/desktop:nl_classifier_demo failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 171.710s, Critical Path: 41.22s
INFO: 326 processes: 326 linux-sandbox.
FAILED: Build did NOT complete successfully

This error is also described in "bazel build failed on nl_classifier_demo" #211, but that report used a different command.

Android: RGB bitmap from YUV image

v0.2.0 introduced support for YUV android.media.Image as part of the tensorflow-lite-task-vision artifact. It seems to use libyuv under the hood. As far as I understand, TensorImage delegates to MediaImageContainer in this case. At the time of writing, it does not support the getBitmap method and throws an exception.

We have the following use case:

  1. take a YUV image from camera;
  2. convert and rotate the image to a RGB Bitmap via RenderScript;
  3. pass the Bitmap to TF;
  4. save the Bitmap as a JPEG file alongside detections done by TF.

We don’t grab an RGB bitmap from the camera directly, since that takes much more time than grabbing a YUV image and processing it via RenderScript. However, the Android 12 tooling deprecates RenderScript.

It would be great if the MediaImageContainer.getBitmap method returned a valid Bitmap. I imagine our use case could then avoid RenderScript entirely. As far as I understand, the byte buffer for the rotated and converted image already exists during the TF processing; it would just need to be copied into a Bitmap on demand.

If this is not something TF wants to support, can I ask for pointers on best practices for something like this? Meaning reading the camera frame, processing it via TF, and then saving the frame.

Problem about different tflite model's invoke time cost?

I successfully converted an object detection model and a semantic segmentation Keras model (.h5) into TensorFlow Lite models (.tflite), but I found that during inference the segmentation model takes more than 4 seconds per invoke() while the object detection model needs less than 0.1 seconds.

Below is the inference code for the segmentation model:

import numpy as np
import cv2
import tensorflow as tf
import time
import os
def preprocess(x):
    x = x.astype(np.float32)
    x /= 255.0
    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]
    x[..., 0] -= mean[0]
    x[..., 1] -= mean[1]
    x[..., 2] -= mean[2]
    if std is not None:
        x[..., 0] /= std[0]
        x[..., 1] /= std[1]
        x[..., 2] /= std[2]
    return x
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="model_segment_vl_512x1024.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test model on random input data.
input_shape = input_details[0]['shape']

height = 1024
width = 512
path = './images/5x/VL_5x/'
img_list = os.listdir(path)
st = time.time()
for i,img_name in enumerate(img_list):
    if i>4:
        break
    img_path = path + img_name
    I = preprocess(cv2.imread(img_path))#[:,64:576,:]
    I0 = cv2.resize(I, (width, height))
    
    interpreter.set_tensor(input_details[0]['index'], I0.reshape((1, width, height, 3)).astype(np.float32))
    
    interpreter.invoke()
   
    # The function `get_tensor()` returns a copy of the tensor data.
    # Use `tensor()` in order to get a pointer to the tensor.
    
    output_data = interpreter.get_tensor(output_details[0]['index'])
     
    # print(output_data)

    # cv2.imshow('test0', np.squeeze(output_data))
    # cv2.waitKey(-1)
print(time.time()-st)    

5 images took 22.35933494567871 s, an average of ≈4.4 s per image.

Below is the inference code for the object detection model:

import tensorflow as tf

from math import pi
import time
import numpy as np
import cv2

weights_name = 'model_VL.tflite'
w = 224
h = 224

interpreter = tf.lite.Interpreter(model_path=weights_name)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
image = cv2.imread('images/3x/VL_3x/0020430.png')
image_out = image
H = image_out.shape[0]
DW = int((image_out.shape[1]-image_out.shape[0])/2)
DH = 0
count = 0
st = time.time()
for i in range(3):
    O = cv2.resize(image[DH:DH+H,DW:DW+H,:], (w, h))
    
    interpreter.set_tensor(input_details[0]['index'], 
        O.reshape((1, h, w, 3)).astype(np.float32))
    interpreter.invoke()        
    Y = interpreter.get_tensor(output_details[0]['index'])

    x = int(round((Y[0][2]/2+0.25)*H))+DW
    y = int(round((Y[0][1]/2+0.25)*H))+DH
    H = int(round(H*0.3))
    if Y[0][0]>0.5:
        image_out = cv2.rectangle(image_out, (x-H, y-H), (x+H, y+H), (0, 0, 255), 5)
        count += 1
    DW = x-H
    DH = y-H
    H = H*2
    if DW<0:
        DW = 0
    if DW + H>=image_out.shape[1]:
        DW = image_out.shape[1] - H
    if DH<0:
        DH = 0
    if DH + H>=image_out.shape[0]:
        DH = image_out.shape[0] - H
print(time.time()-st)

3 images took 0.26386356353759766 s, an average of ≈0.09 s per image.

How can I speed up my segmentation model's invoke()?
Thanks!
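
For reference, one common first lever is to give the interpreter more CPU threads; a minimal sketch, assuming a TF 2.x release where tf.lite.Interpreter accepts the num_threads argument:

import tensorflow as tf

# Multi-threaded CPU inference: the built-in kernels can parallelize
# across cores, which tends to help large segmentation models the most.
interpreter = tf.lite.Interpreter(
    model_path="model_segment_vl_512x1024.tflite",
    num_threads=4)  # tune to the number of physical cores
interpreter.allocate_tensors()

Beyond threading, the input size alone explains much of the gap: a 512x1024 input has roughly ten times the pixels of a 224x224 one, and a segmentation head decodes at full resolution, so reducing the input resolution or quantizing the model are the usual next steps.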

KeyError: 'Assert_AssertGuard_true_12261'

Hello! I am getting the following error while trying to convert to a TFLite model.

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-8-d9b9324f9c0d> in <module>()
      3 # tfjs.graph_model_to_saved_model("model/model.json", "tf_saved_model")
      4 
----> 5 converter = tf.lite.TFLiteConverter.from_saved_model("tf_saved_model")
      6 tflite_model = converter.convert()
      7 

5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/function_deserialization.py in fix_node_def(node_def, functions, shared_name_suffix, debug_name)
    381   for _, attr_value in node_def.attr.items():
    382     if attr_value.func.name:
--> 383       attr_value.func.name = functions[attr_value.func.name].name
    384 
    385   # Fix old table creation bug.

KeyError: 'Assert_AssertGuard_true_12261'

Code to convert the model:

import tfjs_graph_converter.api as tfjs
import tensorflow as tf
tfjs.graph_model_to_saved_model("model/model.json", "tf_saved_model")

converter = tf.lite.TFLiteConverter.from_saved_model("tf_saved_model")
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
  f.write(tflite_model)

Error running Bert Question Answerer

Hello!

I tried using the Bert Question Answerer Demo exactly as instructed here.

I installed Bazel via Bazelisk and downloaded the TFLite model with curl:

curl \
 -L 'https://tfhub.dev/tensorflow/lite-model/mobilebert/1/default/1?lite-format=tflite' \
 -o /tmp/mobilebert.tflite

And ran the question answerer demo:

bazel run -c opt \
 tensorflow_lite_support/examples/task/text/desktop:bert_question_answerer_demo -- \
 --model_path=/tmp/mobilebert.tflite \
 --question="Where is Amazon rainforest?" \
 --context="The Amazon rainforest, alternatively, the Amazon Jungle, also known in \
English as Amazonia, is a moist broadleaf tropical rainforest in the Amazon \
biome that covers most of the Amazon basin of South America. This basin \
encompasses 7,000,000 km2 (2,700,000 sq mi), of which \
5,500,000 km2 (2,100,000 sq mi) are covered by the rainforest. This region \
includes territory belonging to nine nations."

However, I'm getting the error "No input process unit found from metadata.":

INFO: Options provided by the client:
  Inherited 'common' options: --isatty=1 --terminal_columns=211
INFO: Reading rc options for 'run' from /mnt/c/Users/felip/Documents/Projetos-weg/tflite-support/.bazelrc:
  Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'run' from /mnt/c/Users/felip/Documents/Projetos-weg/tflite-support/.bazelrc:
  Inherited 'build' options: --apple_platform_type=macos --enable_platform_specific_config --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --action_env ANDROID_NDK_HOME --action_env ANDROID_NDK_API_LEVEL --action_env ANDROID_BUILD_TOOLS_VERSION --action_env ANDROID_SDK_API_LEVEL --action_env ANDROID_SDK_HOME --define framework_shared_object=true --define open_source_build=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --cxxopt=-D_GLIBCXX_USE_CXX11_ABI=0 --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=short_logs --config=v2
INFO: Found applicable config definition build:short_logs in file /mnt/c/Users/felip/Documents/Projetos-weg/tflite-support/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /mnt/c/Users/felip/Documents/Projetos-weg/tflite-support/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:linux in file /mnt/c/Users/felip/Documents/Projetos-weg/tflite-support/.bazelrc: --copt=-w --cxxopt=-std=c++14 --host_cxxopt=-std=c++14
INFO: Analyzed target //tensorflow_lite_support/examples/task/text/desktop:bert_question_answerer_demo (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //tensorflow_lite_support/examples/task/text/desktop:bert_question_answerer_demo up-to-date:
  bazel-bin/tensorflow_lite_support/examples/task/text/desktop/bert_question_answerer_demo
INFO: Elapsed time: 0.190s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow_lite_support/examples/task/text/desktop/bert_question_answerer_demo '--model_path=/tmp/mobilebert.tflite' '--question=Where is Amazon rainforest?' '--context=The Amazon rainforest, alternatively, the Amazon Jungle, also known in English as Amazonia, is a moist broadleaf tropical rainforest in the Amazon biome that covers most of the Amazon basin of South America. This baINFO: Build completed successfully, 1 total action
Answer failed: No input process unit found from metadata.

Is there something I'm doing wrong?
Thank you in advance!

bazel: 3.7.2
running on WSL2
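
That error is raised when the demo cannot find the expected input process unit (the tokenizer) in the model's metadata. One way to inspect what metadata the downloaded file actually carries is the tflite-support Python package; a minimal sketch, not from the original report:

from tflite_support import metadata

# Dump the metadata embedded in the flatbuffer; a Bert question answerer
# model is expected to declare a tokenizer process unit on its inputs.
displayer = metadata.MetadataDisplayer.with_model_file("/tmp/mobilebert.tflite")
print(displayer.get_metadata_json())
print(displayer.get_packed_associated_file_list())  # e.g. the packed vocab

If the dump shows no process units, the downloaded file is likely the bare model rather than the metadata-populated variant.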

convert tflite to tensorflow graph

Hello all,

I am new to this world, so hopefully my question is clear and relevant to this forum.
I want to convert a .tflite file to a TensorFlow graph.
I have tried several ways:

  1. tflite --> onnx: I used tflite2onnx.convert(tflite_path, onnx_path).
     Result: NotImplementedError: Unsupported TFLite OP: 41
     There seems to be at least one unsupported operation, so I was unable to convert.

  2. tflite --> pb: I found online that this is possible only up to TF 1.9, so I downgraded my TF to 1.9
     and tried following the instructions (which involved using Bazel; I couldn't even install Bazel).

Anyway, is there a known way (either one of the above or any other you can think of) that I can use?

Thanks in advance
Chen
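
One possibly relevant pointer, offered as an assumption rather than a verified fix: recent tf2onnx releases can ingest TFLite flatbuffers directly, and their op coverage is broader than tflite2onnx's, so the unsupported-op error may not recur. A sketch, assuming tf2onnx >= 1.9:

import tf2onnx

# Convert the TFLite flatbuffer straight to ONNX.
model_proto, external_tensors = tf2onnx.convert.from_tflite(
    "model.tflite", output_path="model.onnx")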

Cannot convert tensorflow model (resnetv1_50 based) to tflite

Hello! I trained a model based on ResNet-50 with TensorFlow.
After saving the model in checkpoint format (checkpoint, index, meta), I'm using the same session to convert the model to TFLite.

The code is the following:

import tensorflow as tf
from tensorflow import lite

## Training model code is omitted here ##

saver.save(sess, path_to_save, global_step=it)

# Converting a GraphDef from session.
converter = lite.TFLiteConverter.from_session(sess, list(batch.values()), posenet.output_tensors)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True
tflite_model = converter.convert()
open("./converted_model.tflite", "wb").write(tflite_model)

These are my inputs:
[<tf.Tensor 'fifo_queue_Dequeue:0' shape=(1, 200, 200, 3) dtype=float32>,
<tf.Tensor 'fifo_queue_Dequeue:1' shape=(1, 26, 26, 21) dtype=float32>,
<tf.Tensor 'fifo_queue_Dequeue:2' shape=(1, 26, 26, 21) dtype=float32>,
<tf.Tensor 'fifo_queue_Dequeue:3' shape=(1, 26, 26, 42) dtype=float32>,
<tf.Tensor 'fifo_queue_Dequeue:4' shape=(1, 26, 26, 42) dtype=float32>]

These are my outputs:
[<tf.Tensor 'pose/part_pred/block4/BiasAdd:0' shape=(1, 26, 26, 21) dtype=float32>,
<tf.Tensor 'pose/locref_pred/block4/BiasAdd:0' shape=(1, 26, 26, 42) dtype=float32>]

I'm showing the error output below; any help would be appreciated:

---------------------------------------------------------------------------
ConverterError                            Traceback (most recent call last)
<ipython-input-10-458b03f29263> in <module>
     30         converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
     31         converter.allow_custom_ops=True
---> 32         tflite_model = converter.convert()
     33 #         open("./converted_model.tflite", "wb").write(tflite_model)
     34 #         with open('model.tflite', 'wb') as f:

~\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\lite\python\lite.py in convert(self)
    981           input_tensors=self._input_tensors,
    982           output_tensors=self._output_tensors,
--> 983           **converter_kwargs)
    984     else:
    985       result = _toco_convert_graph_def(

~\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\lite\python\convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
    447       input_data.SerializeToString(),
    448       debug_info_str=debug_info_str,
--> 449       enable_mlir_converter=enable_mlir_converter)
    450   return data
    451 

~\AppData\Roaming\Python\Python36\site-packages\tensorflow_core\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    198       stdout = _try_convert_to_unicode(stdout)
    199       stderr = _try_convert_to_unicode(stderr)
--> 200       raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
    201   finally:
    202     # Must manually cleanup files.

ConverterError: See console for info.
El sistema no puede encontrar la ruta especificada. (Translation: The system cannot find the specified path.)
2021-02-01 17:29:45.263535: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2021-02-01 17:29:45.264019: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
c:\users\pipita\anaconda3\envs\aws_train\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
c:\users\pipita\anaconda3\envs\aws_train\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
c:\users\pipita\anaconda3\envs\aws_train\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
c:\users\pipita\anaconda3\envs\aws_train\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
c:\users\pipita\anaconda3\envs\aws_train\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
c:\users\pipita\anaconda3\envs\aws_train\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
2021-02-01 17:29:48.157783: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: FIFOQueueV2
2021-02-01 17:29:48.158185: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2021-02-01 17:29:48.290435: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: QueueDequeueV2
2021-02-01 17:29:48.322668: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 685 operators, 1055 arrays (0 quantized)
2021-02-01 17:29:48.343643: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 683 operators, 1054 arrays (0 quantized)
2021-02-01 17:29:48.367174: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 683 operators, 1054 arrays (0 quantized)
2021-02-01 17:29:48.582688: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 92 operators, 231 arrays (0 quantized)
2021-02-01 17:29:48.586252: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 92 operators, 231 arrays (0 quantized)
2021-02-01 17:29:48.588560: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 92 operators, 231 arrays (0 quantized)
2021-02-01 17:29:48.593331: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 10880000 bytes, theoretical optimal value: 7680000 bytes.
2021-02-01 17:29:48.594149: I tensorflow/lite/toco/toco_tooling.cc:439] Estimated count of arithmetic ops: 10608315996 ops, equivalently 5304157998 MACs
2021-02-01 17:29:48.594470: I tensorflow/lite/toco/toco_tooling.cc:454] Number of parameters: 24644870

tf.keras model converted to tflite returns just NaN

System information

OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
TensorFlow installed from (source or binary): pip
TensorFlow version: 2.3

Hey everybody,
I am still relatively inexperienced with TensorFlow Lite and would like to convert a model with a custom training function into a TFLite model, in order to run it on a Google Coral accelerator if possible. I am training the model on an NVIDIA Tesla V100 and would like to convert the saved model/graph (.pb, .h5) into a .tflite.
My model is almost identical to the one in the CycleGAN tutorial.
After conversion to the TFLite format I get only NaN as model output.
I have also tried to convert the model from the tutorial into TFLite, and the same error occurs.
I have also tried different conversion types, TensorFlow versions, etc., and nothing worked.
Are there possibly any functions in it that TFLite does not support?

Information about the files (link):
The notebook tf_cyclehorse.ipynb is used to train the model and create the .h5 file.
To convert the .h5 file to a .tflite file I use a script similar to conv_tflite.py
(run it with a console command similar to: python conv_tflite.py --model model.h5 --output model.tflite).
tflite_test.ipynb is a simple notebook to test the output of the TFLite model.
It reads in the image horse.jpg and the model model.tflite and reproduces the error output.

Follow the Google Drive link: there you will find all the notebooks, scripts, models, and images; almost everything you need.

The files are also available on github.
https://github.com/ptrem/cycleGan_tf_tutorial

Thanks a lot in advance.
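
For anyone debugging the same symptom, a quick way to separate conversion problems from preprocessing problems is to feed the converted model a controlled dummy input; a minimal sketch (file names are placeholders):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# CycleGAN-style generators typically expect inputs in [-1, 1]; NaNs on a
# well-ranged dummy input implicate the converted graph rather than the
# image preprocessing.
x = np.random.uniform(-1.0, 1.0, inp['shape']).astype(np.float32)
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
y = interpreter.get_tensor(out['index'])
print("contains NaN:", np.isnan(y).any())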

What is the meaning of score (e.g. 8.109188)?

I am trying to understand what the score means; it's not a probability, as you can see in the debugger in my IDE.

For example, shoe shop has a score of 8.109188. What does that mean? Thank you!
[screenshot of the debugger output omitted]
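
For context: scores like this are raw logits when the model's final layer has no softmax; applying a softmax across all class scores turns them into probabilities. A minimal sketch (the second and third values are made up for illustration):

import numpy as np

def softmax(logits):
    z = logits - np.max(logits)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([8.109188, 2.3, -1.0])  # 'shoe shop' plus two dummy classes
print(softmax(scores))                    # non-negative values summing to 1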

TFLite Support Codegen Tool: Quantized models don't output probabilities

I am trying to understand why the denormalised (dequantized) output from my quantized model does not lie between 0 and 1, as probabilities should. Instead, the values are sometimes negative floats and sometimes numbers larger than 10.

I used the TFLite Support codegen tool to generate the class, so I did not modify it myself. I have also found that ML Kit and the TFLite Task Library do not provide probabilities (numbers between 0 and 1) either.

Here's what I found:

  1. Probabilities are not 0. See here
  2. I discovered that this is being calculated by the probabilityPostprocessor: TensorProcessor
  3. This is set by fun resetProbabilityPostprocessor
  4. This setter is passed the following as an argument: instance.buildDefaultProbabilityPostprocessor()
  5. The code in buildDefaultProbabilityPostprocessor is:
    private fun buildDefaultProbabilityPostprocessor(): TensorProcessor {
        return TensorProcessor.Builder()
                .add(DequantizeOp(
                        metadata.probabilityQuantizationParams.zeroPoint.toFloat(),
                        metadata.probabilityQuantizationParams.scale))
                .build()
    }
  6. The zeroPoint and scale values are taken from probabilityQuantizationParams
  7. probabilityQuantizationParams is set with probabilityTensor.quantizationParams()
  8. And finally, val probabilityTensor = model.getOutputTensor(0).

The last step, step 8, seems incorrect? Finally, I discovered that the quantization (and dequantization) parameters are taken from the output tensor of the model. What does this mean? How does getting the output tensor of a model (trained or untrained) provide any information about quantization?
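
For reference, TFLite's affine quantization scheme defines real_value = scale * (quantized_value - zero_point); scale and zero_point are fixed per tensor when the model is converted, which is why they can be read off the output tensor itself. A small sketch with illustrative values:

import numpy as np

def dequantize(q, scale, zero_point):
    # TFLite affine quantization: real = scale * (quantized - zero_point)
    return scale * (q.astype(np.float32) - zero_point)

# A softmax output quantized to uint8 typically has scale = 1/255 and
# zero_point = 0, so dequantized values land back in [0, 1].
q = np.array([250, 4, 1], dtype=np.uint8)
print(dequantize(q, scale=1.0 / 255.0, zero_point=0))

If the dequantized values are negative or far above 1, the output tensor's quantization parameters are evidently not those of a softmax, which usually means the model's last layer emits logits rather than probabilities.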

Thank you, :)

bazel build failed on nl_classifier_demo

source: commit 1202da8

OS: Ubuntu 18.04.5

bazel: 3.7.0

c++ toolchain: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0

command:

bazel build tensorflow_lite_support/examples/task/text/desktop:nl_classifier_demo

error message:

tensorflow_lite_support/cc/task/text/nlclassifier/nl_classifier.cc: In function 'tflite::support::StatusOr<std::unique_ptr<tflite::support::text::tokenizer::Tokenizer> > tflite::task::text::nlclassifier::{anonymous}::CreateRegexTokenizerFromProcessUnit(const tflite::ProcessUnit*, const tflite::metadata::ModelMetadataExtractor*)':
tensorflow_lite_support/cc/task/text/nlclassifier/nl_classifier.cc:136:10: error: could not convert 'regex_tokenizer' from 'std::unique_ptr<tflite::support::text::tokenizer::RegexTokenizer>' to 'tflite::support::StatusOr<std::unique_ptr<tflite::support::text::tokenizer::Tokenizer> >'
   return regex_tokenizer;
          ^~~~~~~~~~~~~~~

Cannot find 'MetalDelegate' in scope

When connecting the mobile phone to run the segmentation demo, it gives an error.
The code snippet:
#if targetEnvironment(simulator)
// Use CPU for inference as MetalDelegate does not support iOS simulator.
options = Interpreter.Options()
options?.threadCount = 2
#else
// Use GPU on real device for inference as this model is fully supported.
delegates = [MetalDelegate()]
#endif

Null is not an object evaluating (TfliteReactNative.loadModel)

dependencies": { "react": "16.13.1", "react-native": "0.63.0", "react-native-image-picker": "^2.3.2", "tflite-react-native": "0.0.5"
target 'TFDemo' do
pod 'TensorFlowLite', '1.12.0'
config = use_native_modules!

/**

Metro configuration for React Native
https://github.com/facebook/react-native
@Format
*/
module.exports = {
resolver: {
assetExts: ['tflite', 'txt']
Screenshot 2020-07-13 at 12 03 51 PM

}
}
Hey Folks ,
I'm facing this issue . Is this a bug or i'm missing any things

Which header files to include to run BERT qa task?

Hi,

You provide an example of how to run the BERT QA task here:

using tflite::task::text::qa::BertQuestionAnswerer;
using tflite::task::text::qa::QaAnswer;
// Create API handler with Mobile Bert model.
auto qa_client = BertQuestionAnswerer::CreateBertQuestionAnswererFromFile("/home/mohamed/tfLiteTransformers/lite-model_mobilebert_1_metadata_1.tflite", "/path/to/vocab");
// Or create API handler with Albert model.
// auto qa_client = BertQuestionAnswerer::CreateAlbertQuestionAnswererFromFile("/path/to/alBertModel", "/path/to/sentencePieceModel");


std::string context =
    "Nikola Tesla (Serbian Cyrillic: Никола Тесла; 10 "
    "July 1856 – 7 January 1943) was a Serbian American inventor, electrical "
    "engineer, mechanical engineer, physicist, and futurist best known for his "
    "contributions to the design of the modern alternating current (AC) "
    "electricity supply system.";
std::string question = "When was Nikola Tesla born?";
// Run inference with `context` and a given `question` to the context, and get top-k
// answers ranked by logits.
const std::vector<QaAnswer> answers = qa_client->Answer(context, question);
// Access QaAnswer results.
for (const QaAnswer& item : answers) {
  std::cout << absl::StrFormat("Text: %s logit=%f start=%d end=%d", item.text,
                               item.pos.logit, item.pos.start, item.pos.end)
            << std::endl;
}
// Output:
// Text: 10 July 1856 logit=16.8527 start=17 end=19
// ... (and more)
//
// So the top-1 answer is: "10 July 1856".

What header files should I include so that this code runs?

-- When I include "tensorflow_lite_support/cc/task/text/qa/bert_question_answerer.h" I get the error:
class "std::unique_ptr<tflite::task::text::qa::QuestionAnswerer, std::default_delete<tflite::task::text::qa::QuestionAnswerer>>" has no member "Answer"

-- When I add "tensorflow_lite_support/cc/task/text/qa/bert_qa_c_api.h" to solve the above error, I get a different error:
incomplete type is not allowed, pointing at "BertQuestionAnswerer::CreateBertQuestionAnswererFromFile"

Thanks,
Mohamed

TFLite Model personalization on IoT devices

Hello tflite-support team!

Thank you for building an awesome project.

Do you have roadmap visibility for model personalization (transfer learning) on IoT devices (e.g. Raspberry Pi)? The feature has been available for Android since 2019, and the blog post hints at an upcoming implementation for other platforms.

Future work
The evolution of the transfer learning pipeline in the future is likely to be coupled with the development of the full training solution in TensorFlow Lite. Today we provide the transfer learning pipeline as a separate example on GitHub, and in the future we plan to support full training. The transfer learning converter would then be adapted to produce a single TensorFlow Lite model that would be able to run without an additional runtime library.

Thank you,

Ivelin

Encoding/decoding NLP model in tensorflow lite (fine-tuned GPT2)

We are in the process of building a small virtual assistant and would like it to be able to run a fine-tuned version of GPT-2 on a Raspberry Pi with a Coral accelerator.

So far we have managed to convert our model to a .tflite and to get first results. We know how to convert from words to indices with the original tokenizer, but the interpreter then needs a bigger tensor as input. We are missing the conversion from indices to tensors. Is there a way to do this simply?

You can find our pseudo-code below; we are stuck at steps 2 and 6:

import tensorflow as tf
import numpy as np
from transformers import GPT2Tokenizer  # needed for GPT2Tokenizer below
 
#Prelude
TF_MODEL_PATH_LITE = "/path/model.tflite"
 
interpreter = tf.lite.Interpreter(model_path=TF_MODEL_PATH_LITE)
interpreter.allocate_tensors()
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
 
#1-Encode input, giving you indices
context_idx = tokenizer.encode("Hello world.", return_tensors = "tf")
 
#2-How to convert the context_idx to appropriate np.array ?
input_data = np.array(np.random.random_sample(input_shape), dtype=np.int32) #dummy input for now
 
#3- feed input
interpreter.set_tensor(input_details[0]['index'], input_data)
 
#4- Run model
interpreter.invoke()
 
#5- Get output as tensor
output_data = interpreter.get_tensor(output_details[0]['index'])
 
#6- How decode this np array to idx
output_idx=np.random.randint(100) #dummy for now ...
 
#7- Decode Output from idx to word
string_tf = tokenizer.decode(output_idx, skip_special_tokens=True)
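
A minimal sketch of how steps 2 and 6 are often handled for a fixed-shape GPT-2 export, meant to slot into the snippet above. Assumptions, not facts from the post: the model takes int32 token ids of shape (1, seq_len), returns logits of shape (1, seq_len, vocab_size), and 0 works as a pad id:

import numpy as np

seq_len = int(input_shape[1])
ids = context_idx.numpy()[0].astype(np.int32)

# Step 2: right-pad (or truncate) the encoded ids to the fixed length.
input_data = np.zeros((1, seq_len), dtype=np.int32)
n = min(len(ids), seq_len)
input_data[0, :n] = ids[:n]

# ... steps 3-5 run as in the snippet above ...

# Step 6: greedy decoding takes the highest-logit token at the last
# real (non-padded) position; feed it to tokenizer.decode as before.
output_idx = int(np.argmax(output_data[0, n - 1]))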

