Comments (2)
Hello @AhmedFkih, thank you for your interest in Ultralytics YOLOv8! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord community for real-time conversations and collaborations. This platform offers a perfect space to ask questions, showcase your work, and connect with fellow Ultralytics users.
Install
Pip install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8.
pip install ultralytics
Environments
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
- Notebooks with free GPU
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@AhmedFkih hello,
To convert a YOLOv10 model to TensorFlow Lite (TFLite) with INT8 quantization, you'll generally follow these steps:
1. Export YOLOv10 to ONNX or SavedModel: First, export your model to a compatible format such as ONNX or TensorFlow's SavedModel.
2. Convert to TensorFlow Lite: Use TensorFlow's TFLite Converter to convert the model from the SavedModel format to TFLite. During this step, you can enable INT8 quantization.
3. Calibration: For INT8 quantization, you'll need to perform calibration using a representative dataset. This helps accurately map the floating-point values to INT8.
Here's a basic example using TensorFlow's TFLite Converter:
import tensorflow as tf

# Path to the exported SavedModel directory
saved_model_dir = 'path_to_saved_model'

# Set up the converter with INT8 quantization
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8

# Convert the model
tflite_model = converter.convert()

# Save the TFLite model
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
For the representative_data_gen, you need to provide a function that yields batches of input data from your dataset. This data is used to calibrate the quantization parameters.
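As a minimal sketch of such a generator (the calibration_images array, the image count of 100, and the 640x640x3 input shape are illustrative assumptions; substitute images preprocessed exactly as your model expects):

import numpy as np

# Hypothetical stand-in for ~100 preprocessed calibration images,
# already scaled to the model's expected float32 input range.
calibration_images = np.random.rand(100, 640, 640, 3).astype(np.float32)

def representative_data_gen():
    """Yield single-image batches for INT8 calibration."""
    for image in calibration_images:
        # The converter expects a list of input tensors per call.
        yield [np.expand_dims(image, axis=0)]

Around 100-500 representative samples drawn from your real training or validation data are typically enough for calibration.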
For more detailed guidance and advanced configurations, please refer to the TensorFlow Lite documentation on model optimization and quantization.
Best of luck with your deployment!
The Ultralytics Team