Comments (3)
Hello! It sounds like you're trying to run inference simultaneously on two different cameras using two instances of the YOLO model. If you're experiencing performance issues, it might be due to the resources available on your machine, especially if you're using a single GPU or CPU.
To potentially improve performance, you can try running each model on a separate thread or process to better utilize your hardware. Here's a simple example using Python's threading module:
import threading
from ultralytics import YOLO

def run_inference(model_path, image_path):
    model = YOLO(model_path)
    model.predict(image_path)

# Thread for the first camera
thread1 = threading.Thread(target=run_inference, args=('yolov8n.pt', 'img.jpg'))
# Thread for the second camera
thread2 = threading.Thread(target=run_inference, args=('yolov8n.pt', 'img2.jpg'))

thread1.start()
thread2.start()
thread1.join()
thread2.join()
This approach initializes each model in its own thread, potentially improving the utilization of your computational resources. Make sure your system has enough memory and processing power to handle multiple models simultaneously. If you continue to experience issues, consider using more powerful hardware or optimizing your model for better performance.
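If the cameras stream continuously, you might also keep each model alive in a long-running worker thread and feed it frames through a queue, so the model is loaded once rather than per image. Below is a minimal stdlib sketch of that pattern; the `predict` callable is a stand-in for a loaded YOLO model's predict method, not the real API:

```python
import queue
import threading

def worker(frames, results, predict):
    # The expensive resource (the model) would be created once here
    # and reused for every frame this thread receives.
    while True:
        frame = frames.get()
        if frame is None:      # sentinel value: shut the worker down
            break
        results.put(predict(frame))

frames, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(frames, results, str.upper))
t.start()

for frame in ('img1.jpg', 'img2.jpg'):
    frames.put(frame)
frames.put(None)   # tell the worker to exit
t.join()

processed = [results.get() for _ in range(2)]
print(processed)   # ['IMG1.JPG', 'IMG2.JPG']
```

In a real setup you would start one such worker per camera and have the capture loop push frames into that camera's queue.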
from ultralytics.
import threading
from ultralytics import YOLO

model1 = YOLO('model.pt')
model2 = YOLO('model.pt')

def infer(model, img_path):
    return model.predict(img_path)

thread1 = threading.Thread(target=infer, args=(model1, 'img1.jpg'))
thread2 = threading.Thread(target=infer, args=(model2, 'img2.jpg'))
thread1.start()
thread2.start()
thread1.join()
thread2.join()
Since my code runs inference continuously, I don't want to reload the model on every call, so can I write the method like this instead? However, in my tests multi-threading does not help: with a single thread each image takes about 100 ms, so two sequential inferences take 200 ms, but with two threads the time printed in each thread's log is also about 200 ms per image.
Hello! It looks like you're trying to run inference in parallel using threading, but you're not seeing any performance improvement. This issue might be due to Python's Global Interpreter Lock (GIL), which prevents multiple native threads from executing Python bytecodes at once. This can be particularly restrictive for CPU-bound tasks.
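You can see the effect in isolation with a small stdlib-only sketch (no YOLO involved): two threads doing pure-Python CPU-bound work take roughly as long as running the same work back to back, because only one thread can hold the GIL at a time.

```python
import threading
import time

def busy(n):
    # Pure-Python CPU-bound loop; it holds the GIL while running.
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 2_000_000

# Two runs back to back on one thread.
start = time.perf_counter()
busy(N)
busy(N)
sequential = time.perf_counter() - start

# The same two runs on two threads.
threads = [threading.Thread(target=busy, args=(N,)) for _ in range(2)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start
# On a standard CPython build, `threaded` ends up close to
# `sequential`, not half of it.
```

Note that inference that runs mostly inside native code (e.g. on a GPU) can release the GIL, so real timings depend on where your 100 ms is actually spent.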
For better performance with parallel processing in Python, consider using the multiprocessing module instead of threading. It bypasses the GIL by running each worker in a separate process with its own memory space:
from multiprocessing import Process
from ultralytics import YOLO

def infer(model_path, img_path):
    model = YOLO(model_path)
    return model.predict(img_path)

if __name__ == '__main__':
    process1 = Process(target=infer, args=('model.pt', 'img1.jpg'))
    process2 = Process(target=infer, args=('model.pt', 'img2.jpg'))
    process1.start()
    process2.start()
    process1.join()
    process2.join()
This approach should help you better utilize your hardware capabilities and see improved performance when running inference on multiple inputs simultaneously.