
Comments (5)

github-actions commented on July 4, 2024

👋 Hello @centurions, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status: Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.


glenn-jocher commented on July 4, 2024

@centurions hello! It looks like the issue you're encountering during prediction with the ONNX model might be related to the output tensor dimensions expected by the post-processing function. The error suggests that the output tensor shape does not match the expected shape, which is causing the assertion error.

A potential solution is to verify the output shapes of your ONNX model to ensure they match what the YOLOv8 framework expects for classification tasks. You can use tools like Netron to visualize the ONNX model and check the output dimensions.

If the dimensions are indeed different, you might need to adjust the export settings or modify the post-processing code to handle the shape returned by your ONNX model. Here's a quick way to check the output shape using ONNX:

import onnx
import onnxruntime as ort

# Load your model and validate it
onnx_model = onnx.load("best.onnx")
onnx.checker.check_model(onnx_model)

# Create a session and read the input name from the model metadata
ort_session = ort.InferenceSession("best.onnx")
input_name = ort_session.get_inputs()[0].name

# Run inference and check the output shape
outputs = ort_session.run(None, {input_name: your_input_tensor})
print('Output shape:', outputs[0].shape)

Adjust your_input_tensor to match the input size your model expects. This snippet will help you confirm if the output dimensions are as expected.
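If you don't have a preprocessed image at hand, you can also build a dummy tensor directly from the model's declared input shape just to confirm the shapes line up. A minimal sketch, assuming the export has a static input shape (for a classification export you would typically expect a single output of shape (batch, num_classes)):

import numpy as np
import onnxruntime as ort

ort_session = ort.InferenceSession("best.onnx")
input_meta = ort_session.get_inputs()[0]

# Replace any dynamic/symbolic dimensions with 1 for a quick shape check
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy_input = np.random.rand(*shape).astype(np.float32)

outputs = ort_session.run(None, {input_meta.name: dummy_input})
print('Input shape:', shape, '-> Output shape:', outputs[0].shape)

If the printed output has more than two dimensions for a classification model, the mismatch is most likely on the export side rather than in the post-processing code.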

If you continue to experience issues, please provide more details about the output shapes, and we can explore further adjustments!


centurions commented on July 4, 2024

Hello, thank you.
I have the same error with OpenVINO and other export formats, and I couldn't solve it. Do you have any other idea how it can be solved?


glenn-jocher commented on July 4, 2024

Hello @centurions,

Thank you for your patience. The issue seems to be consistent across multiple export formats, indicating it might be related to the output tensor dimensions expected by the post-processing function.

To troubleshoot further, I recommend checking the output shapes of your exported models. You can use tools like Netron to visualize the model and ensure the output dimensions match what YOLOv8 expects for classification tasks.

Additionally, you can inspect the output shapes using the following code snippet for ONNX:

import onnx
import onnxruntime as ort

# Load your model and validate it
onnx_model = onnx.load("best.onnx")
onnx.checker.check_model(onnx_model)

# Create a session and read the input name from the model metadata
ort_session = ort.InferenceSession("best.onnx")
input_name = ort_session.get_inputs()[0].name

# Run inference and check the output shape
outputs = ort_session.run(None, {input_name: your_input_tensor})
print('Output shape:', outputs[0].shape)

Replace your_input_tensor with the appropriate input tensor for your model. This will help confirm if the output dimensions are as expected.
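Since you mentioned OpenVINO as well, the same kind of shape check can be run against that export. A rough sketch, assuming the OpenVINO 2.0 (2022+) runtime API, a static input shape, and the default export folder name produced by YOLOv8 (adjust the .xml path to your actual export):

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("best_openvino_model/best.xml")  # hypothetical path, adjust to your export
compiled = core.compile_model(model, "CPU")

inp = compiled.input(0)
out = compiled.output(0)
print('Input :', inp.any_name, list(inp.shape))

# Dummy input with the declared static shape, just to inspect the output shape
dummy = np.random.rand(*[int(d) for d in inp.shape]).astype(np.float32)
result = compiled([dummy])
print('Output:', out.any_name, result[out].shape)

Comparing the printed output shapes across the ONNX and OpenVINO exports should tell us whether the problem is in one specific exporter or in the post-processing step.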

If the issue persists, please share the output shapes, and we can explore further adjustments. Feel free to reach out if you need more assistance! 😊


Burhan-Q commented on July 4, 2024

@centurions perhaps you could try downgrading your torch install? I've seen other users having some issues with torch==2.3.0, and I was able to successfully export and run inference using some of the standard assets.

[screenshot: export and inference output]
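If you want to try that, a hypothetical downgrade to the previous stable release (the exact version pins are only an example; pick the torchvision release that matches your torch version) would look something like:

pip install "torch==2.2.2" "torchvision==0.17.2"

Then re-run the export and prediction to see whether the assertion error goes away.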
