Comments (7)
Closing the issue. Feel free to reopen.
@lu-wang-g Hi Lu, it seems to be an issue related to GPT2Tokenizer, do you know which team maintains the GPT2Tokenizer?
You should be able to do step 2 by copying data from context_idx to input_data. Let me know what your problem is when doing so.
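For reference, a minimal sketch of that step, assuming context_idx holds the tokenized context as a list of integer ids and the model takes a [batch, sequence] tensor of ids (the model path and example ids below are placeholders, not from the thread):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="gpt2.tflite")  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

# Copy the tokenized context into the buffer the interpreter expects.
context_idx = [464, 3290, 318, 845]  # example token ids only
input_data = np.array([context_idx], dtype=input_details[0]['dtype'])

# Note: input_data must match the model's expected input shape
# (see the padding discussion further down the thread).
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()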
Dear @lu-wang-g,
Many thanks for your answer.
When we try what you suggest, this is what we get:
ValueError Traceback (most recent call last)
<ipython-input-40-d8a95a34331a> in <module>()
24
25 #3- feed input
---> 26 interpreter.set_tensor(input_details[0]['index'], input_data)
27
28 #4- Run inference
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/interpreter.py in set_tensor(self, tensor_index, value)
570 ValueError: If the interpreter could not set the tensor.
571 """
--> 572 self._interpreter.SetTensor(tensor_index, value)
573
574 def resize_tensor_input(self, input_index, tensor_size, strict=False):
ValueError: Cannot set tensor: Dimension mismatch. Got 4 but expected 5 for dimension 1 of input 0.
So let me rephrase the questions we have now:
About the shapes of the input/output tensors:
I am not sure I understand what the 5 stands for in the printed input data shape. Should it not be the length of the sequence, which may vary? Or is it a minimum length?
Similarly, what do the output shape dimensions stand for? Should it not be the size of the dictionary and the length of the input fed?
About the input encoding:
Shall we encode the inputs in a specific way beforehand and feed encoder_outputs to interpreter.set_tensor?
I saw that in the Hugging Face library they first pass the input through an encoder before feeding it to the model. Cf. this link:
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_tf_utils.py#L320
Is there a simple way to do something similar in this context?
Is it simply a one-hot encoding that we need?
About the output decoding:
Is next_token_logits right as it is? I am not sure which part of output_data should be sampled (here, greedily) in order to get the next token index. It currently seems to produce a string instead of a single token.
You can find a colab with our process here:
https://colab.research.google.com/drive/1dPzO058qtS0VHO3BmrrJgGLxXjrH6CHd?usp=sharing
Dear @lu-wang-g,
Many thanks for your answer.
When we try what you suggest, this is what we get (full traceback above):
ValueError: Cannot set tensor: Dimension mismatch. Got 4 but expected 5 for dimension 1 of input 0.
So let me rephrase the questions we have now:
About the shapes of the input/output tensors:
I am not sure I understand what the 5 stands for in the printed input data shape. Should it not be the length of the sequence, which may vary? Or is it a minimum length?
According to the error message, the input shape is [batch, 5]. Your model doesn't seem to accept a variable input length. You can verify this by printing out input_shape, where input_shape = input_details[0]['shape'], and seeing whether each dimension has a value other than -1. Normally for NLP models, it (5 in this case) means the maximum number of ids accepted. If your number of ids is less than 5, pad it with the PAD token from the vocabulary file. See our text classification app for an example. Also see the input processing logic.
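Continuing the sketch above, the shape check and padding could look roughly like this (the pad id of 0 is an assumption; use the PAD token id from your own vocabulary):

input_shape = input_details[0]['shape']       # e.g. [1, 5]
print(input_shape)                            # a -1 here would indicate a dynamic dimension

max_len = int(input_shape[1])
pad_id = 0                                    # assumed PAD token id
ids = context_idx[:max_len]
ids = ids + [pad_id] * (max_len - len(ids))   # pad (or truncate) to the fixed length

input_data = np.array([ids], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], input_data)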
Similarly, what do the output shape dimensions stand for? Should it not be the size of the dictionary and the length of the input fed?
It depends on what your output tensor is. Your model seems to be a text-to-text model, so the output contains the ids of the tokens that represent the output text. Similarly, if the output shape is [batch, N], N will be the maximum number of ids that can be returned by this model.
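A quick way to check what the output tensor actually is, using the same interpreter as above:

output_details = interpreter.get_output_details()
print(output_details[0]['shape'], output_details[0]['dtype'])  # e.g. [1 5 50257] float32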
About the input encoding:
Shall we encode the inputs in a specific way beforehand and feed encoder_outputs to interpreter.set_tensor?
I saw that in the Hugging Face library they first pass the input through an encoder before feeding it to the model. Cf. this link:
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_tf_utils.py#L320
Is there a simple way to do something similar in this context?
Not that I'm aware of. What is the encoder in your link? Do you have pointers?
Is it simply a one-hot encoding that we need?
Inference should use exactly the same encoder/decoder as the training script, and encoding varies from model to model.
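For a model exported from Hugging Face's GPT-2 (which is what the colab appears to do), the encoder in question would just be the GPT2Tokenizer, which maps text to integer ids rather than one-hot vectors. A minimal sketch, assuming that setup:

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
context_idx = tokenizer.encode("The dog is very")  # -> list of integer token ids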
About the output decoding:
Is next_token_logits right as it is? I am not sure which part of output_data should be sampled (here, greedily) in order to get the next token index. It currently seems to produce a string instead of a single token.
You can find a colab with our process here:
https://colab.research.google.com/drive/1dPzO058qtS0VHO3BmrrJgGLxXjrH6CHd?usp=sharing
I don't have the context of what the output tensor is, i.e. what each dimension in the shape (1, 5, 50257) refers to. It would be helpful if you could provide that information. In general, always refer to the training script for pre/post-processing. If you can point me to the training script, I can take a look as well.
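For what it's worth, here is a hedged sketch of greedy decoding, reusing the names from the sketches above and assuming (1, 5, 50257) is (batch, sequence_length, vocab_size), i.e. per-position logits over the GPT-2 vocabulary:

output_data = interpreter.get_tensor(output_details[0]['index'])

last_pos = len(context_idx) - 1                    # last real (non-PAD) position
next_token_logits = output_data[0, last_pos, :]    # shape (50257,)
next_token_id = int(np.argmax(next_token_logits))  # greedy choice
print(tokenizer.decode([next_token_id]))           # decode the single id back to text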
Thanks @lu-wang-g, sorry, I have not had time to look into it since last time.
@Guillaume-slize Hi, I know it has been a long time. Do you remember how you solved this issue with the output decoding?
Related Issues (20)
- Getting "Cannot copy to a TensorFlowLite tensor (serving_default_input_1:0) with 63984 bytes from a Java Buffer with 64000 bytes" error while attempting to pass the Input and Output TensorAudio Buffer to TFLite Interpreter for inference HOT 1
- ImportError from image_utils HOT 4
- Could you please provide a aarch64 tflite-support wheel for python 3.10 HOT 2
- YOLOv8 `tensorflow>=2.14.0` ImportError: generic_type: cannot initialize type "StatusCode": an object with that name is already defined HOT 6
- How to read .wav in adroid jni?
- `tf.lite`, or `tflite_tuntime.interpreter as `tflite`?
- TensorFlowLiteTaskAudio for iOS: How to save audio and then play it
- ERROR: Could not find a version that satisfies the requirement tflite-support==0.4.4 HOT 4
- Installation Error: Unable to find installation candidates for tflite-support (0.4.4) HOT 1
- Build TensorFlowLiteTaskVision_framework failed HOT 2
- How to handle dynamic output tensors with `tflite.runForMultipleInputsOutputs`
- iOS Error duplicate symbols HOT 2
- 0.4.2 and 0.4.3 versions of the pod are crashing on app launch on iOS 12.5.7
- Quantization with tflite : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(int8))
- How to install tflite_support in pip by source compiling HOT 3
- Possibility of channel-by-channel image normalization when adding metadata
- Document TFLite Support vs MediaPipe HOT 2
- No 0.4.4 PyPi package available for Arm/Python 3.11 HOT 1
- Model Loading Error for Image Embedder in Android
- MobileNetV3 quantization