
Comments (3)

glenn-jocher avatar glenn-jocher commented on June 25, 2024

Hello! It sounds like you're trying to run inference simultaneously on two different cameras using two instances of the YOLO model. If you're experiencing performance issues, it might be due to the resources available on your machine, especially if you're using a single GPU or CPU.

To potentially improve performance, you can try running each model on a separate thread or process to better utilize your hardware. Here’s a simple example using Python's threading module:

import threading
from ultralytics import YOLO

def run_inference(model_path, image_path):
    model = YOLO(model_path)
    model.predict(image_path)

# Thread for the first camera
thread1 = threading.Thread(target=run_inference, args=('yolov8n.pt', 'img.jpg'))

# Thread for the second camera
thread2 = threading.Thread(target=run_inference, args=('yolov8n.pt', 'img2.jpg'))

thread1.start()
thread2.start()

thread1.join()
thread2.join()

This approach initializes each model in its own thread, potentially improving the utilization of your computational resources. Make sure your system has enough memory and processing power to handle multiple models simultaneously. If you continue to experience issues, consider using more powerful hardware or optimizing your model for better performance.
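For continuous per-camera inference, the same pattern extends naturally: each thread loads its model once and then loops over frames, pushing results to a shared queue. The sketch below is stdlib-only so it stays self-contained; `FakeModel` is a hypothetical stand-in for `YOLO` (its `predict` just echoes its input), so the structure (one model per thread, one queue for results) is the point, not the model itself.

```python
import queue
import threading

class FakeModel:
    """Hypothetical stand-in for YOLO; predict() just echoes its input."""
    def __init__(self, weights):
        self.weights = weights

    def predict(self, frame):
        return (self.weights, frame)

def camera_worker(weights, frames, results):
    # Load the model once per thread, then reuse it for every frame.
    model = FakeModel(weights)
    for frame in frames:
        results.put(model.predict(frame))

results = queue.Queue()
t1 = threading.Thread(target=camera_worker,
                      args=('yolov8n.pt', ['cam1_f0', 'cam1_f1'], results))
t2 = threading.Thread(target=camera_worker,
                      args=('yolov8n.pt', ['cam2_f0', 'cam2_f1'], results))
t1.start(); t2.start()
t1.join(); t2.join()

outputs = [results.get() for _ in range(4)]
```

With the real library you would replace `FakeModel(weights)` with `YOLO(weights)` inside the worker, so each thread owns its own model instance and no cross-thread sharing is needed.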


ohj666 avatar ohj666 commented on June 25, 2024
import threading
from ultralytics import YOLO

# Load each model once so it can be reused across frames
model1 = YOLO('model.pt')
model2 = YOLO('model.pt')

def infer(model, img_path):
    return model.predict(img_path)

img_path = 'img.jpg'  # placeholder path

thread1 = threading.Thread(target=infer, args=(model1, img_path))
thread2 = threading.Thread(target=infer, args=(model2, img_path))
thread1.start()
thread2.start()
thread1.join()
thread2.join()

Because my code runs inference continuously, I don't want to reload the model every time, so can I write it like this? However, in testing I found that multi-threading doesn't help: if a single thread needs 100 ms per image, then two sequential single-threaded inferences take 200 ms in total, but when I use two threads, each thread's log shows 200 ms per image as well.


glenn-jocher avatar glenn-jocher commented on June 25, 2024

Hello! It looks like you're trying to run inference in parallel using threading, but you're not seeing any performance improvement. This issue might be due to Python's Global Interpreter Lock (GIL), which prevents multiple native threads from executing Python bytecodes at once. This can be particularly restrictive for CPU-bound tasks.

For better performance with parallel processing in Python, consider using the multiprocessing module instead of threading. It sidesteps the GIL by running each worker in its own process, with its own interpreter and memory space:

from multiprocessing import Process
from ultralytics import YOLO

def infer(model_path, img_path):
    model = YOLO(model_path)  # each process loads its own copy of the model
    model.predict(img_path)   # note: a Process target's return value is discarded

if __name__ == '__main__':
    process1 = Process(target=infer, args=('model.pt', 'img1.jpg'))
    process2 = Process(target=infer, args=('model.pt', 'img2.jpg'))
    process1.start()
    process2.start()
    process1.join()
    process2.join()

This approach should help you better utilize your hardware capabilities and see improved performance when running inference on multiple inputs simultaneously.

