codingmantras / yolov8-streamlit-detection-tracking

Object detection and tracking algorithm implemented for Real-Time video streams and static images.

Home Page: https://codingmantras-yolov8-streamlit-detection-tracking-app-njcqjg.streamlit.app/

machine-learning ml object-detection streamlit streamlit-application yolo yolov8 object-tracker tracking-algorithm tracking-by-detection

yolov8-streamlit-detection-tracking's Introduction

Real-time Object Detection and Tracking with YOLOv8 & Streamlit

This repository is an extensive open-source project showcasing the seamless integration of object detection and tracking using YOLOv8 (object detection algorithm), along with Streamlit (a popular Python web application framework for creating interactive web apps). The project offers a user-friendly and customizable interface designed to detect and track objects in real-time video streams from sources such as RTSP, UDP, and YouTube URLs, as well as static videos and images.

Explore Implementation Details on Medium (three-part blog series)

For a deeper dive into the implementation, check out my three-part blog series on Medium, where I detail the step-by-step process of creating this web application.

WebApp Demo on Streamlit Server

Thanks to the Streamlit team for the community support with the cloud deployment.

This app is up and running on the Streamlit Cloud server! You can check the demo of this web application at this link: yolov8-streamlit-detection-tracking-webapp

Tracking With Object Detection Demo

Tracking-With_object-Detection-MOV.mov

Demo Pics

Home page

Page after uploading an image and object detection

Segmentation task on image

Requirements

  • Python 3.6+
  • YOLOv8 (ultralytics)
  • Streamlit

Installation

  • Install the required packages: pip install ultralytics streamlit pytube

Usage

  • Run the app with the following command: streamlit run app.py
  • The app should open in a new browser window.

ML Model Config

  • Select the task (Detection or Segmentation).
  • Use the slider to adjust the model confidence threshold (25-100).

Once the model config is done, select a source.
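For reference, here is a minimal sketch of what the sidebar configuration could look like with Streamlit and Ultralytics; the widget labels and weight paths below are assumptions for illustration, not the exact app.py.

# Hypothetical sketch of the sidebar model configuration (labels/paths are assumptions).
import streamlit as st
from ultralytics import YOLO

st.sidebar.header("ML Model Config")

# Task selection: Detection or Segmentation.
task = st.sidebar.radio("Select Task", ["Detection", "Segmentation"])

# Confidence slider shown as 25-100, converted to the 0-1 range YOLO expects.
confidence = st.sidebar.slider("Select Model Confidence", 25, 100, 40) / 100

# Load the matching pretrained weights.
model_path = "weights/yolov8n.pt" if task == "Detection" else "weights/yolov8n-seg.pt"
model = YOLO(model_path)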

Detection on images

  • A default image and its objects-detected counterpart are displayed on the main page.
  • Select a source (the Image radio button).
  • Upload an image by clicking the "Browse files" button.
  • Click the "Detect Objects" button to run the object detection algorithm on the uploaded image with the selected confidence threshold.
  • The resulting image with detected objects is displayed on the page. If "Save image to download" is selected, click the "Download Image" button to download it.
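As a rough illustration of that flow, reusing the model and confidence from the config sketch above (variable names are assumptions):

# Hypothetical sketch of the image upload-and-detect flow.
import PIL.Image
import streamlit as st

source_img = st.sidebar.file_uploader("Choose an image...", type=("jpg", "jpeg", "png"))

if source_img is not None:
    uploaded_image = PIL.Image.open(source_img)
    st.image(uploaded_image, caption="Uploaded Image", use_column_width=True)

    if st.sidebar.button("Detect Objects"):
        results = model.predict(uploaded_image, conf=confidence)
        # plot() returns a BGR numpy array with boxes drawn; [:, :, ::-1] converts it to RGB.
        detected_image = results[0].plot()[:, :, ::-1]
        st.image(detected_image, caption="Detected Image", use_column_width=True)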

Detection in Videos

  • Create a folder named videos in the project's root directory.
  • Place your videos in this folder.
  • In settings.py, edit the following lines:
# video
VIDEO_DIR = ROOT / 'videos' # After creating the videos folder

# Suppose you have four videos inside the videos folder.
# Replace video_1 ... video_4 with the names of your video files.
VIDEO_1_PATH = VIDEO_DIR / 'video_1.mp4' 
VIDEO_2_PATH = VIDEO_DIR / 'video_2.mp4'
VIDEO_3_PATH = VIDEO_DIR / 'video_3.mp4'
VIDEO_4_PATH = VIDEO_DIR / 'video_4.mp4'

# Edit the same names here also.
VIDEOS_DICT = {
    'video_1': VIDEO_1_PATH,
    'video_2': VIDEO_2_PATH,
    'video_3': VIDEO_3_PATH,
    'video_4': VIDEO_4_PATH,
}

# Your videos will now appear in the Streamlit web app under 'Choose a video'.
  • Click the Detect Video Objects button and the selected task (detection/segmentation) will run on the chosen video.
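Under the hood, frame-by-frame detection on a selected local video can be sketched roughly as follows, again reusing model and confidence from above; the exact helper layout is an assumption.

# Hypothetical sketch of frame-by-frame detection on a video from VIDEOS_DICT.
import cv2
import streamlit as st
from settings import VIDEOS_DICT

source_vid = st.sidebar.selectbox("Choose a video...", VIDEOS_DICT.keys())

if st.sidebar.button("Detect Video Objects"):
    cap = cv2.VideoCapture(str(VIDEOS_DICT[source_vid]))
    st_frame = st.empty()  # placeholder overwritten with each processed frame
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break
        results = model.predict(frame, conf=confidence)
        st_frame.image(results[0].plot(), channels="BGR", use_column_width=True)
    cap.release()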

Detection on RTSP

  • Select the RTSP stream source.
  • Enter the RTSP URL in the text box and hit the Detect Objects button.
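OpenCV can open an RTSP stream URL directly, so the loop is essentially the same as for local videos (a minimal sketch under the same assumptions):

# Hypothetical sketch: detection on an RTSP stream entered in the sidebar.
import cv2
import streamlit as st

rtsp_url = st.sidebar.text_input("rtsp stream url")

if st.sidebar.button("Detect Objects"):
    cap = cv2.VideoCapture(rtsp_url)  # OpenCV reads RTSP URLs natively
    st_frame = st.empty()
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break
        results = model.predict(frame, conf=confidence)
        st_frame.image(results[0].plot(), channels="BGR", use_column_width=True)
    cap.release()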

Detection on YouTube Video URL

  • Select YouTube as the source.
  • Copy and paste the video URL into the text box.
  • The detection/segmentation task will start on the YouTube video.
movobjdetyoutubeurl.mov
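The YouTube path uses pytube to resolve a direct stream URL that OpenCV can open. A rough sketch follows; the stream filter parameters are assumptions.

# Hypothetical sketch: resolving a YouTube URL with pytube and detecting on it.
import cv2
import streamlit as st
from pytube import YouTube

youtube_url = st.sidebar.text_input("YouTube Video url")

if st.sidebar.button("Detect Objects"):
    yt = YouTube(youtube_url)
    # Pick an mp4 stream; the resolution choice here is just an example.
    stream = yt.streams.filter(file_extension="mp4", res="720p").first()
    cap = cv2.VideoCapture(stream.url)
    st_frame = st.empty()
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break
        results = model.predict(frame, conf=confidence)
        st_frame.image(results[0].plot(), channels="BGR", use_column_width=True)
    cap.release()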

Acknowledgements

This app uses the YOLOv8 object detection algorithm and the Streamlit library for the user interface.

Disclaimer

Please note this project is intended for educational purposes only and should not be used in production environments.

Hit star ⭐ if you like this repo!!!

yolov8-streamlit-detection-tracking's People

Contributors

codingmantras, ilyasdemir-demirilyas


yolov8-streamlit-detection-tracking's Issues

Possibility to add a counter on the webcam object detection and tracker

Hi, this is great! The Yolo object detection and tracker on a webcam feed could be very useful on live microscopy camera video feeds.
Would it, however, be possible to add a live counter on the live video feed? For example, if I only wanted to detect one class on a webcam, could I add a counter to the video feed, or to a separate description field below the video feed, that counts the number of detected objects in real time?
And if this is possible would I be able to generate and display a ratio between the number of two classes in real time?

Thank you!

video displaying

Hi again, sorry for bothering you.

When choosing a video from my PC it works, but the detection is too slow. How can I make the detection run in real time?
Thank you

RTSP feed issue

Hi. The detection works fine for YouTube videos, but when trying an RTSP feed it freezes after about 2 seconds. I am watching the feed simultaneously on another PC, so I know the feed is up and running. Any ideas?

I am using a Mac M1.

CUDA ERROR

I try to run the app with PyTorch CUDA, but it crashes:

Error loading video: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel] QuantizedCPU: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel] BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback] Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:153 [backend fallback] FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback] Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:290 [backend fallback] Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback] Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback] Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback] ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback] ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback] AutogradOther: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:53 [backend fallback] AutogradCPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:57 [backend fallback] AutogradCUDA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:65 [backend fallback] AutogradXLA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:69 [backend fallback] AutogradMPS: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:77 [backend fallback] AutogradXPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:61 [backend fallback] AutogradHPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:90 [backend fallback] AutogradLazy: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:73 [backend fallback] AutogradMeta: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:81 [backend fallback] Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:296 [backend fallback] AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:382 [backend fallback] AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:249 [backend fallback] FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:710 [backend fallback] FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback] Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback] VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback] FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback] PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:161 [backend fallback] FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback] PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:165 [backend fallback] PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:157 [backend fallback]

CONVERT TO .EXE

Hi, is it possible to convert this application into a .exe? Could you help?

I need to combine 3 weights

Hi, I have a project where I need to combine 3 YOLOv8 models using ensemble learning,
and I tried to do it inside the code,
but I failed. Everything worked, just not in real time. Can you please help me?

Next stage of the project (question before installation)

Hello. I was interested in your project, read the 3 articles on Medium, and now I'm planning to install a copy for myself.

I have a task: I need to track the number of people in a frame in real time. I must have at least 4 cameras on the screen. At the same time, I need to display an image from them and some statistics.

Will it be possible to expand the current webapp to display multiple cameras at the same time?
Can I write directly on the image the number of people in the frame?
Is it possible to write data somewhere for later processing?
For example, display a histogram with the number of people in the frame by hour. Or, for example, if there are less than 5 people in the frame, issue some kind of trigger.

I'm just starting to work with computer vision, and Streamlit in particular, and I would like to understand whether implementing my tasks is possible in principle.

Problem with Youtube video

Hi, first thank you for your great work. I used your code and it's great, but when choosing YouTube an error appears.
The error:
"
Error loading video: ERROR: Unable to extract uploader id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
"
I did use "pip install --upgrade youtube-dl"
and also "youtube-dl --verbose https://www.youtube.com/path...."

but the error remains.
Thank you

About YOLOv8 weights

Hello Sir,
I'm new to ML and trying to train a model (object detection) on Google Colab and make a Streamlit web app of it in Colab, because my local machine has no GPU.
My question is:
After training, I need to load the model. I'm confused about which file I should load: best.pt or yolov8s.pt?
After that I can make a function to upload images and detect objects with bounding boxes and labels on them.
I will be very grateful to you.
