Comments (6)
Hello @fbarbe00, thank you for your interest in Ultralytics YOLOv8! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Install
Pip install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8:
pip install ultralytics
Environments
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
- Notebooks with free GPU:
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
from ultralytics.
@fbarbe00 hi there,
Thank you for bringing this to our attention and providing a detailed report along with a reproducible code example. This is very helpful!
It appears that the max_det parameter is being retained across subsequent predictions even when it is not explicitly set. This behavior is indeed unexpected and could be indicative of a bug.
To help us investigate further, could you please confirm the following:
- Are you using the latest versions of torch and ultralytics? If not, please upgrade to the latest versions and try running your code again: pip install --upgrade torch ultralytics
- If the issue persists after upgrading, please provide any additional details that might help us reproduce the bug, such as the specific model architecture and any custom modifications you might have made.
In the meantime, as a workaround, you can reinitialize the model object before each prediction to ensure that the max_det parameter does not carry over:
from ultralytics import YOLO
model = YOLO(f"../{CURRENT_MODEL}.pt")
result = model(image, iou=0.9, conf=0.01)[0]
print(f"1. Loaded model, no max_det - Number of predictions: {len(result.boxes.conf)}") # 32
model = YOLO(f"../{CURRENT_MODEL}.pt")
result = model(image, iou=0.9, conf=0.01, max_det=1)[0]
print(f"2. Reinitialized model, max_det=1 - Number of predictions: {len(result.boxes.conf)}") # 1
model = YOLO(f"../{CURRENT_MODEL}.pt")
result = model(image, iou=0.9, conf=0.01)[0]
print(f"3. Reinitialized model, no max_det - Number of predictions: {len(result.boxes.conf)}") # 32
This should ensure that each prediction is independent of the previous ones.
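Reloading the weights before every call is fairly heavy. If the cause is what it appears to be, namely per-call overrides being cached on a reused predictor, then passing max_det explicitly on every call should also restore full output, using the documented predict default of 300 when no cap is wanted. Here is a minimal self-contained sketch of that assumed mechanism (StickyPredictor is a hypothetical stand-in, not Ultralytics code; ASSUMED_DEFAULT_MAX_DET comes from the predict docs):

```python
ASSUMED_DEFAULT_MAX_DET = 300  # assumed predict default, per the Ultralytics docs


class StickyPredictor:
    """Hypothetical stand-in for a predictor whose keyword overrides persist."""

    def __init__(self):
        self.args = {"max_det": ASSUMED_DEFAULT_MAX_DET}

    def __call__(self, n_boxes, **overrides):
        self.args.update(overrides)  # overrides are retained (the suspected bug)
        return min(n_boxes, self.args["max_det"])


pred = StickyPredictor()
print(pred(32))                                   # 32
print(pred(32, max_det=1))                        # 1
print(pred(32))                                   # still 1: the cap stuck
print(pred(32, max_det=ASSUMED_DEFAULT_MAX_DET))  # 32 again, no reload needed
```

Under this assumption, always passing max_det explicitly makes each call independent without re-reading the weights file.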
Please let us know if this resolves the issue or if you need further assistance. We appreciate your patience and cooperation as we work to improve our models.
Hey! Thanks for your reply. As I wrote in the environment section, this was with Ultralytics v8.2.31 and torch-2.2.2+cu121.
I have updated both ultralytics and torch, and can confirm that the issue still persists.
Hi @fbarbe00,
Thank you for the update and for confirming that you're using the latest versions of ultralytics and torch. We appreciate your diligence in testing this.
Given that the issue persists, it seems there might be a bug with how the max_det parameter is being retained across predictions. To ensure we can investigate this thoroughly, could you please provide a minimal reproducible example that demonstrates the issue? This will help us reproduce the bug on our end and work towards a solution. You can find guidelines for creating a minimal reproducible example here.
In the meantime, as a workaround, you can reinitialize the model object before each prediction to ensure that the max_det parameter does not carry over:
from ultralytics import YOLO
model = YOLO(f"../{CURRENT_MODEL}.pt")
result = model(image, iou=0.9, conf=0.01)[0]
print(f"1. Loaded model, no max_det - Number of predictions: {len(result.boxes.conf)}") # 32
model = YOLO(f"../{CURRENT_MODEL}.pt")
result = model(image, iou=0.9, conf=0.01, max_det=1)[0]
print(f"2. Reinitialized model, max_det=1 - Number of predictions: {len(result.boxes.conf)}") # 1
model = YOLO(f"../{CURRENT_MODEL}.pt")
result = model(image, iou=0.9, conf=0.01)[0]
print(f"3. Reinitialized model, no max_det - Number of predictions: {len(result.boxes.conf)}") # 32
This should ensure that each prediction is independent of the previous ones.
Thank you for your patience and cooperation. We're here to help, so please let us know if you need any further assistance!
Hi,
Thank you for the effort of reading and answering most issues.
However, have you guys actually read my initial issue? I had already provided both the version and code to replicate the issue. I actually also already provided the workaround you suggested.
Here's an even more minimal version of the code, that you can run directly:
import torch
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
image = torch.rand(1, 3, 640, 640)
result = model(image, iou=0.9, conf=0.01)[0]
print(f"1. Loaded model, no max_det - Number of predictions: {len(result.boxes.conf)}")
result = model(image, iou=0.9, conf=0.01, max_det=1)[0]
print(f"2. Same model obj, max_det=1 - Number of predictions: {len(result.boxes.conf)}")
result = model(image, iou=0.9, conf=0.01)[0]
print(f"3. Same model obj, no max_det - Number of predictions: {len(result.boxes.conf)}")
model = YOLO("yolov8n.pt")
result = model(image, iou=0.9, conf=0.01)[0]
print(f"4. Loaded model, no max_det - Number of predictions: {len(result.boxes.conf)}")
Note that since the image is random, it might not always return more than one box (though it usually does, since the confidence threshold is so low).
Hi @fbarbe00,
Thank you for your detailed follow-up and for providing a more minimal code example. We appreciate your effort in helping us understand and reproduce the issue.
I have reviewed your code and can confirm that the behavior you're experiencing, with the max_det parameter being retained across predictions, is indeed unexpected. This looks like a bug that needs further investigation.
Here's a concise summary of the issue:
- Setting max_det=1 limits the number of predictions to one.
- Subsequent predictions without max_det still return only one prediction until the model is reloaded.
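Both observations are consistent with a predictor that is built lazily on the first call and then reused, with each call's keyword overrides written into its cached arguments. The following self-contained sketch illustrates that assumed mechanism (TinyModel is hypothetical, not Ultralytics internals; the 300 default is taken from the predict documentation) and shows that discarding the cached predictor resets behavior without reloading weights:

```python
class TinyModel:
    """Hypothetical stand-in for a model that lazily builds a predictor
    on the first call and caches it for all later calls."""

    def __init__(self):
        self.predictor = None  # built lazily, mirroring the assumed real behavior

    def __call__(self, n_boxes, **overrides):
        if self.predictor is None:
            self.predictor = {"max_det": 300}  # fresh defaults on (re)build
        self.predictor.update(overrides)       # per-call overrides persist here
        return min(n_boxes, self.predictor["max_det"])


m = TinyModel()
print(m(32))             # 32
print(m(32, max_det=1))  # 1
print(m(32))             # 1 -> the override survived, matching the report

m.predictor = None       # drop the cached predictor instead of reloading weights
print(m(32))             # 32 -> fresh defaults again
```

If the real cause matches this sketch, a fix would be to apply per-call overrides on top of the defaults for each call rather than mutating the cached predictor's arguments in place.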
Your minimal reproducible example is very helpful. We will investigate this behavior further to identify the root cause and work on a fix.
In the meantime, as a workaround, reinitializing the model before each prediction, as you mentioned, ensures that the max_det parameter does not carry over. Here's a quick reminder of that approach:
from ultralytics import YOLO
import torch
model = YOLO("yolov8n.pt")
image = torch.rand(1, 3, 640, 640)
# Initial prediction without max_det
result = model(image, iou=0.9, conf=0.01)[0]
print(f"1. Loaded model, no max_det - Number of predictions: {len(result.boxes.conf)}")
# Prediction with max_det=1
result = model(image, iou=0.9, conf=0.01, max_det=1)[0]
print(f"2. Same model obj, max_det=1 - Number of predictions: {len(result.boxes.conf)}")
# Reinitialize model to reset parameters
model = YOLO("yolov8n.pt")
result = model(image, iou=0.9, conf=0.01)[0]
print(f"3. Reinitialized model, no max_det - Number of predictions: {len(result.boxes.conf)}")
We appreciate your patience and understanding as we work to resolve this issue. If you have any further questions or additional details to share, please feel free to let us know.