
Comments (40)

github-actions avatar github-actions commented on September 26, 2024

πŸ‘‹ Hello @Nixson-Okila, thank you for your interest in Ultralytics YOLOv8 πŸš€! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a πŸ› Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

from ultralytics.

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello! Modifying the YOLOv8-OBB model to output polygonal bounding boxes (PBB) with four corners instead of the standard oriented bounding boxes (OBB) involves a few changes to the model's architecture and post-processing steps.

Here’s a brief guide on how to approach this:

  1. Model Output Modification: You'll need to adjust the model's head to output eight values (x1, y1, x2, y2, x3, y3, x4, y4) representing the coordinates of the four corners of the bounding box. This can be done in the model definition file (typically a .yaml file).

  2. Post-Processing Changes: Modify the post-processing code to handle these eight output values correctly. This involves adjusting the code that interprets the model outputs to create bounding boxes from these coordinates.

  3. Loss Function Adjustment: Ensure that the loss function used during training can handle the difference in bounding box representation. You might need to customize it to calculate the loss based on the distances between the predicted corners and the actual corners.
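For step 3, here is a minimal sketch of a corner-distance loss of the kind described (pure-Python illustration, not the Ultralytics loss; the cyclic-shift matching accounts for the arbitrary starting corner of a predicted polygon):

```python
# Minimal corner-regression loss sketch for 4-point polygonal boxes.
# Illustration only; a real implementation would operate on batched tensors.

def corner_loss(pred, target):
    """pred, target: lists of 4 (x, y) corner tuples. Mean squared corner distance."""
    def loss_for_order(p):
        return sum((px - tx) ** 2 + (py - ty) ** 2
                   for (px, py), (tx, ty) in zip(p, target)) / 4.0
    # The first predicted corner is arbitrary, so take the best cyclic ordering
    return min(loss_for_order(pred[i:] + pred[:i]) for i in range(4))
```

For example, a prediction that is the same quad listed from a different starting corner scores zero, while a quad shifted by one unit in x scores 1.0.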

Here's a simple example of what changes in the model definition might look like (assuming you're familiar with the structure of the .yaml files used to configure YOLOv8 models):

# Assuming 'head' is the section of the model where outputs are defined
# (illustrative only — real Ultralytics YAML rows follow [from, repeats, module, args])
head:
  - [-1, 1, Conv, [256, 3, 1]]
  - [-1, 1, Conv, [8, 1, 1]]  # Outputting 8 values (x1, y1, x2, y2, x3, y3, x4, y4)

Remember, these changes require a good understanding of both the model architecture and the training process. Testing and validation are crucial to ensure that the model performs as expected with the new bounding box format.

If you need more detailed guidance, feel free to ask! 😊

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

@Nixson-Okila hello!

Happy to help with the postprocess() function modification. Here's a concise example to handle the eight output values for the polygonal bounding boxes:

def postprocess(self, preds, img, orig_imgs):
    # Assuming preds are the raw model outputs
    results = []
    for pred, orig_img in zip(preds, orig_imgs):
        # Convert model outputs to polygon coordinates
        polygons = pred[:, :8].reshape(-1, 4, 2)  # reshape to (num_boxes, 4 points, 2 coords)
        # NOTE: the stock ops.scale_boxes expects (n, 4) xyxy boxes; you would
        # need a point-wise variant that rescales each (x, y) pair
        polygons = ops.scale_boxes(img.shape[2:], polygons, orig_img.shape[:2])

        # Create Results object (assumes a Results class extended to accept polygons)
        results.append(Results(orig_img, polygons=polygons))
    return results

This snippet assumes that your model outputs the coordinates in a flat format and that you have a utility function scale_boxes to adjust the coordinates to the original image size. Make sure to adapt it to fit the exact output format and utility functions available in your setup.
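The reshape step at the heart of that snippet can be verified in isolation (NumPy stand-in for the tensor operation):

```python
import numpy as np

# Each row holds eight flat values; reshaping groups them into four (x, y) corners.
preds = np.array([[10., 5., 50., 5., 50., 40., 10., 40.]])  # one box, xyxyxyxy
polygons = preds[:, :8].reshape(-1, 4, 2)
print(polygons.shape)  # (1, 4, 2)
print(polygons[0, 2])  # third corner: [50. 40.]
```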

Let me know if you need further assistance! 😊

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Great to hear that, @Nixson-Okila! If you run into any snags or have more questions as you implement the changes, don't hesitate to reach out. Happy coding! 😊

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hi @Nixson-Okila,

Thanks for your feedback! 😊

For the scale_boxes() function, you can modify it to handle the xyxyxyxy format by scaling each coordinate pair individually. Here’s a quick example:

def scale_boxes(img1_shape, boxes, img0_shape):
    gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])
    pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
    boxes[:, [0, 2, 4, 6]] -= pad[0]  # x padding
    boxes[:, [1, 3, 5, 7]] -= pad[1]  # y padding
    boxes[:, :8] /= gain
    return boxes
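A quick sanity check of this logic (NumPy copy of the function above; a 640×640 letterboxed input from a 480×640 original, so all padding is vertical):

```python
import numpy as np

def scale_boxes_np(img1_shape, boxes, img0_shape):
    # NumPy copy of the scale_boxes function above, for a standalone check
    gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])
    pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2
    boxes[:, [0, 2, 4, 6]] -= pad[0]  # x padding
    boxes[:, [1, 3, 5, 7]] -= pad[1]  # y padding
    boxes[:, :8] /= gain
    return boxes

# gain = 1.0, y-padding = 80: letterboxed y = 80 maps back to original y = 0
boxes = np.array([[0., 80., 640., 80., 640., 560., 0., 560.]])
out = scale_boxes_np((640, 640), boxes, (480, 640))
print(out[0])  # [  0.   0. 640.   0. 640. 480.   0. 480.]
```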

Regarding your inquiry, starting from the rectangular bounding box detection code to build polygonal bounding box detection can indeed be simpler in some cases, because rectangular bounding boxes are axis-aligned and easier to manipulate. However, the choice depends on your specific use case and the nature of the objects you are detecting.

Feel free to reach out if you have more questions or need further assistance!

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

You're welcome! Feel free to reach out if you encounter any issues or need further assistance. We're here to help! 😊

Best of luck with your modifications!

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello,

Thank you for reaching out. It seems unusual that the training does not start without any error messages. Here are a few steps you can take to diagnose and potentially resolve the issue:

  1. Check Python Environment: Ensure that your Python environment is set up correctly and all dependencies are installed. Sometimes, missing libraries can cause silent failures.

  2. Verify Paths: Double-check the paths you've provided in the command to ensure they are correct and accessible. This includes path/to/train.py, path/to/my_dataset.yaml, and path/to/default.yaml.

  3. Console Output: Run the command directly in a terminal (outside of any notebooks if you're using one) to see if there are any output messages that might not be showing up in your current environment.

  4. Logs: Check if there are any logs generated in the directory where you are running the command. They might contain clues about what's going wrong.

  5. Minimal Configuration: Try running the training with a minimal configuration using a well-known dataset like COCO128 to rule out any issues with your custom dataset or configuration.

If these steps do not resolve the issue, please provide any additional information about changes you made to the configuration or other relevant details, and we'll be glad to assist further!

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

@Nixson-Okila you're welcome! If you encounter any further issues or have questions as you proceed, don't hesitate to reach out. We're here to help! 😊

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello @Nixson-Okila,

Thank you for reaching out! Modifying the plot_images() function to plot polygonal bounding boxes is a great idea. Let's work through this together.

First, ensure you are using the latest versions of torch and ultralytics. You can upgrade them using:

pip install --upgrade torch ultralytics

Next, to modify plot_images() for polygonal bounding boxes, you can follow these steps:

  1. Locate the plot_images() function: This function is typically found in plotting.py.

  2. Modify the function to handle polygonal coordinates: You will need to adjust the plotting logic to draw polygons instead of rectangles. Here’s a basic example to get you started:

import matplotlib.pyplot as plt
import matplotlib.patches as patches

def plot_images(images, bboxes, save_path=None):
    fig, ax = plt.subplots(1)
    ax.imshow(images)

    for bbox in bboxes:
        # Assuming bbox is in the format [x1, y1, x2, y2, x3, y3, x4, y4]
        polygon = patches.Polygon([(bbox[0], bbox[1]), (bbox[2], bbox[3]), (bbox[4], bbox[5]), (bbox[6], bbox[7])],
                                  closed=True, edgecolor='r', facecolor='none')
        ax.add_patch(polygon)

    if save_path:
        plt.savefig(save_path)
    plt.show()
  3. Integrate this logic into plot_images(): Replace the existing rectangle plotting logic with the above polygon plotting logic.

If you encounter any issues or need further assistance, please provide a minimum reproducible code example. This will help us better understand the context and provide more accurate support. You can find more details on how to create a minimum reproducible example here.

Feel free to reach out if you have any more questions or need further assistance. We're here to help! 😊

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

@Nixson-Okila you're welcome! I'm glad you found the information helpful. 😊

If you encounter any issues or need further assistance, please provide a minimum reproducible code example. This will help us better understand the context and provide more accurate support. You can find more details on how to create a minimum reproducible example here.

Additionally, please ensure you are using the latest versions of torch and ultralytics. You can upgrade them using:

pip install --upgrade torch ultralytics

Feel free to reach out if you have any more questions or need further assistance. We're here to help!


russel0719 avatar russel0719 commented on September 26, 2024


@glenn-jocher Hi, Thank you for your great work.

I want to do the same thing as the author of this issue, but there are some parts of your answer that I didn't understand.
You mentioned modifying the model head according to the example YAML format, but after looking at the yolov8-obb.yaml file, I'm not sure how to edit the head. Could you explain it in detail?

Thanks


glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello @russel0719,

Thank you for your kind words and for reaching out! 😊

To modify the YOLOv8-OBB model to output polygonal bounding boxes (PBB) with four corners, you'll need to make changes to the model's architecture, specifically the head, and adjust the post-processing steps. Let's dive into more detail:

1. Model Output Modification

In the yolov8-obb.yaml file, you'll need to adjust the head to output eight values (x1, y1, x2, y2, x3, y3, x4, y4). Here's a more detailed example:

# yolov8-obb.yaml (illustrative — real Ultralytics rows follow [from, repeats, module, args])

head:
  - [-1, 1, Conv, [256, 3, 1]]
  - [-1, 1, Conv, [8, 1, 1]]  # outputting 8 values for the polygonal bounding box

2. Post-Processing Changes

You'll need to modify the post-processing code to handle these eight output values correctly, reshaping them into four (x, y) corner points as in the postprocess() example earlier in this thread. Once you have the polygons, here's a basic example of how you might plot them:

import matplotlib.pyplot as plt
import matplotlib.patches as patches

def plot_images(images, bboxes, save_path=None):
    fig, ax = plt.subplots(1)
    ax.imshow(images)

    for bbox in bboxes:
        # Assuming bbox is in the format [x1, y1, x2, y2, x3, y3, x4, y4]
        polygon = patches.Polygon([(bbox[0], bbox[1]), (bbox[2], bbox[3]), (bbox[4], bbox[5]), (bbox[6], bbox[7])],
                                  closed=True, edgecolor='r', facecolor='none')
        ax.add_patch(polygon)

    if save_path:
        plt.savefig(save_path)
    plt.show()

3. Loss Function Adjustment

Ensure that the loss function used during training can handle the difference in bounding box representation. You might need to customize it to calculate the loss based on the distances between the predicted corners and the actual corners.
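One practical aid when customizing the loss this way is a signed-area (shoelace) check to flag degenerate or mis-ordered corner predictions before the loss is computed (a sketch, not part of Ultralytics):

```python
# Shoelace formula: signed area of a polygon given its corners in order.
# Positive for counter-clockwise ordering, negative for clockwise,
# near-zero for degenerate predictions. Illustration only.

def shoelace_area(corners):
    """corners: list of (x, y) tuples in traversal order."""
    area = 0.0
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2.0

print(shoelace_area([(0, 0), (2, 0), (2, 1), (0, 1)]))  # 2.0 (counter-clockwise)
```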

Additional Steps

  • Testing and Validation: After making these changes, thoroughly test and validate your model to ensure it performs as expected with the new bounding box format.
  • Reproducible Example: If you encounter any issues, providing a minimum reproducible example can greatly help in diagnosing the problem. You can find more details on how to create one here.

If you need further assistance or have specific questions about any part of the process, feel free to ask! We're here to help. 😊


russel0719 avatar russel0719 commented on September 26, 2024

@glenn-jocher

Thank you for your reply! I'll try it.


glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello @russel0719,

You're welcome! I'm glad to hear that you're going to give it a try. If you encounter any issues or have further questions as you proceed, please don't hesitate to reach out.

For any complex issues, providing a minimum reproducible example can greatly help us diagnose and resolve the problem more efficiently. You can find more details on how to create one here.

Also, please ensure that you are using the latest versions of torch and ultralytics to avoid any compatibility issues. You can upgrade them using:

pip install --upgrade torch ultralytics

Feel free to share your progress or any specific challenges you face. We're here to help! 😊

Best of luck with your modifications!

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello @Nixson-Okila,

Thank you for your detailed explanation of the modifications you're working on. It sounds like you're making significant changes to adapt the model for polygonal bounding boxes. Let's address the issue you're encountering.

Issue Explanation

The error you're seeing, expanded size of the tensor (9) must match the existing size (5) at non-singleton dimension 1, indicates a mismatch in tensor dimensions during the assignment operation. This typically happens when the target tensor's shape does not align with the expected shape in the code.

Steps to Resolve

  1. Adjust the preprocess Method:
    Ensure that the preprocess method correctly handles the new bounding box format. You need to update the tensor operations to accommodate the new size.

    def preprocess(self, targets, batch_size, scale_tensor):
        # Adjust the scale_tensor to the new format: the stock loss passes
        # imgsz[[1, 0, 1, 0]] (w, h, w, h); for eight coordinates extend the
        # index pattern, e.g. scale_tensor = imgsz[[1, 0, 1, 0, 1, 0, 1, 0]]
        # Process targets to match the new format
        # Ensure targets are in the shape [batch_size, 10] where 10 includes 8 coordinates + 2 additional values
        # Your custom processing logic here
  2. Update the __call__ Method:
    Modify the __call__ method to handle the new shape of batch["bboxes"].

    def __call__(self, preds, batch):
        # Ensure batch["bboxes"] is in the shape [n, 8]
        targets = batch["bboxes"]
        if targets.shape[1] != 8:
            raise ValueError(f"Expected targets shape [n, 8], but got {targets.shape}")
        # Your custom processing logic here
  3. Ensure Consistency Across the Codebase:
    Make sure all parts of the code that interact with the bounding boxes are updated to handle the new format. This includes data loading, augmentation, and any other preprocessing steps.

Example Code Adjustment

Here’s a snippet to illustrate the changes:

class v8DetectionLoss(nn.Module):
    def __init__(self, ...):
        super(v8DetectionLoss, self).__init__()
        # Initialization code

    def preprocess(self, targets, batch_size, scale_tensor):
        # Stock loss passes imgsz[[1, 0, 1, 0]] (w, h, w, h); for 8 coords use
        # e.g. scale_tensor = imgsz[[1, 0, 1, 0, 1, 0, 1, 0]]
        # Adjust targets to match the new format
        # Your custom logic here

    def __call__(self, preds, batch):
        targets = batch["bboxes"]
        if targets.shape[1] != 8:
            raise ValueError(f"Expected targets shape [n, 8], but got {targets.shape}")
        # Your custom processing logic here
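As a concrete illustration of where the size-5 vs size-9 mismatch comes from, here is a hypothetical converter from axis-aligned (cls, cx, cy, w, h) target rows to (cls, x1..y4) rows (NumPy sketch, not Ultralytics code):

```python
import numpy as np

# The stock target row is (cls, cx, cy, w, h) — 5 values; the polygonal row is
# (cls, x1, y1, x2, y2, x3, y3, x4, y4) — 9 values. Hypothetical helper.

def xywh_to_corners(targets):
    """targets: (n, 5) array of (cls, cx, cy, w, h) -> (n, 9) of (cls, 8 corner coords)."""
    cls, cx, cy, w, h = targets.T
    return np.stack([cls,
                     cx - w / 2, cy - h / 2,   # top-left
                     cx + w / 2, cy - h / 2,   # top-right
                     cx + w / 2, cy + h / 2,   # bottom-right
                     cx - w / 2, cy + h / 2],  # bottom-left
                    axis=1)

t = np.array([[0., 2., 2., 2., 2.]])  # class 0, 2x2 box centered at (2, 2)
print(xywh_to_corners(t)[0])  # [0. 1. 1. 3. 1. 3. 3. 1. 3.]
```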

Additional Resources

For complex issues, providing a minimum reproducible example can greatly help us diagnose and resolve the problem more efficiently. You can find more details on how to create one here.

Also, please ensure that you are using the latest versions of torch and ultralytics to avoid any compatibility issues. You can upgrade them using:

pip install --upgrade torch ultralytics

Feel free to share your progress or any specific challenges you face. We're here to help! 😊

Best of luck with your modifications!

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

You're welcome, @Nixson-Okila! 😊

It sounds like you're making great progress. To address the tensor size mismatch error, ensure that all parts of your code are updated to handle the new bounding box format. Specifically, make sure that the preprocess and __call__ methods in v8DetectionLoss are correctly adjusted to accommodate the new tensor dimensions.

Here's a quick recap of the key points:

  1. Adjust the preprocess Method:
    Ensure the preprocess method correctly handles the new bounding box format:

    def preprocess(self, targets, batch_size, scale_tensor):
        # Stock loss passes imgsz[[1, 0, 1, 0]] (w, h, w, h); for 8 coords use
        # e.g. scale_tensor = imgsz[[1, 0, 1, 0, 1, 0, 1, 0]]
        # Adjust targets to match the new format
        # Your custom logic here
  2. Update the __call__ Method:
    Modify the __call__ method to handle the new shape of batch["bboxes"]:

    def __call__(self, preds, batch):
        targets = batch["bboxes"]
        if targets.shape[1] != 8:
            raise ValueError(f"Expected targets shape [n, 8], but got {targets.shape}")
        # Your custom processing logic here
  3. Ensure Consistency Across the Codebase:
    Make sure all parts of the code that interact with the bounding boxes are updated to handle the new format, including data loading, augmentation, and any other preprocessing steps.

If the issue persists, providing a minimum reproducible example can greatly help us diagnose and resolve the problem more efficiently. You can find more details on how to create one here.

Also, please ensure that you are using the latest versions of torch and ultralytics to avoid any compatibility issues. You can upgrade them using:

pip install --upgrade torch ultralytics

Feel free to share your progress or any specific challenges you face. We're here to help! 😊

Best of luck with your modifications!

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello @Nixson-Okila,

Thank you for your patience and detailed follow-up. It sounds like you're encountering a challenging issue with the tensor dimensions. Let's address your question about the function that supplies the parameters to the __call__() method in loss.py.

Identifying the Source of Parameters

The __call__() method in the v8DetectionLoss class typically receives its parameters from the training loop. Specifically, the batch dictionary, which includes batch["bboxes"], is prepared during the data loading and preprocessing stages. Here’s a step-by-step guide to help you trace and modify the relevant parts:

  1. Data Loader:
    Ensure that your data loader is correctly preparing the bounding boxes in the new format. This involves modifying the dataset class to output bounding boxes with the shape [n, 8].

    class CustomDataset(Dataset):
        def __getitem__(self, index):
            # Load image and bounding boxes
            image, bboxes = load_data(index)
            # Ensure bboxes are in the shape [n, 8]
            return image, bboxes
  2. Training Loop:
    The training loop typically calls the loss function. Ensure that the batch dictionary is correctly populated with the new bounding box format.

    for batch in dataloader:
        images, bboxes = batch
        preds = model(images)  # forward pass (model assumed already built)
        batch = {"bboxes": bboxes}
        loss = loss_fn(preds, batch)
  3. Loss Function:
    Ensure that the preprocess and __call__ methods in v8DetectionLoss are correctly handling the new format.

    class v8DetectionLoss(nn.Module):
        def preprocess(self, targets, batch_size, scale_tensor):
            # Stock loss passes imgsz[[1, 0, 1, 0]] (w, h, w, h); for 8 coords
            # use e.g. scale_tensor = imgsz[[1, 0, 1, 0, 1, 0, 1, 0]]
            # Adjust targets to match the new format
            # Your custom logic here
    
        def __call__(self, preds, batch):
            targets = batch["bboxes"]
            if targets.shape[1] != 8:
                raise ValueError(f"Expected targets shape [n, 8], but got {targets.shape}")
            # Your custom processing logic here
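The three stages can be traced end-to-end with a toy stand-in to confirm the shapes line up before touching the real pipeline (every name here is hypothetical):

```python
import numpy as np

# Toy trace of the three stages above: the "dataset" emits (n, 8) polygons,
# the "loop" wraps them in a batch dict, the "loss" checks the shape.

def load_item():
    image = np.zeros((3, 64, 64))  # dummy CHW image
    bboxes = np.array([[0., 0., 10., 0., 10., 10., 0., 10.]])  # one (1, 8) polygon
    return image, bboxes

def fake_loss(preds, batch):
    targets = batch["bboxes"]
    assert targets.shape[1] == 8, f"Expected targets shape [n, 8], got {targets.shape}"
    return 0.0

image, bboxes = load_item()
print(bboxes.shape)                          # (1, 8)
print(fake_loss(None, {"bboxes": bboxes}))   # 0.0
```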

Debugging Tips

  • Print Shapes: Add print statements to verify the shapes of tensors at different stages.

    print(f"Shape of targets: {targets.shape}")
  • Reproducible Example: If the issue persists, providing a minimum reproducible example can greatly help in diagnosing the problem. You can find more details on how to create one here.

  • Latest Versions: Ensure you are using the latest versions of torch and ultralytics to avoid compatibility issues. You can upgrade them using:

    pip install --upgrade torch ultralytics

Feel free to share your progress or any specific challenges you face. We're here to help! 😊

Best of luck with your modifications!

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

You're welcome! 😊

If you encounter any further issues, please ensure that you are using the latest versions of torch and ultralytics to avoid compatibility problems. You can upgrade them using:

pip install --upgrade torch ultralytics

Additionally, if the problem persists, providing a minimum reproducible example can greatly help us diagnose and resolve the issue more efficiently. You can find more details on how to create one here.

Feel free to share your progress or any specific challenges you face. We're here to help! Best of luck with your modifications! πŸš€

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello @Nixson-Okila,

Thank you for your detailed report. It sounds like the mosaic4(self, labels) function in augment.py is returning labels with inconsistent shapes, which is causing issues in loss.py.

To address this, you should ensure that the final_labels are consistently formatted to match the expected shape (1, 8, 1) for polygonal bounding boxes. Here’s a quick guide on how to approach this:

  1. Check Label Shapes: Ensure that all labels are converted to the correct shape before they are returned from the mosaic4 function.

    def mosaic4(self, labels):
        # Your existing code
        # Ensure all labels are in the shape (1, 8, 1)
        final_labels = [self.convert_to_polygonal(label) for label in labels]
        return final_labels
    
    def convert_to_polygonal(self, label):
        if label.shape[1] == 6:
            # Convert (1, 6, 1) to (1, 8, 1)
            label = self.expand_to_polygonal(label)
        return label
    
    def expand_to_polygonal(self, label):
        # Custom logic to expand label to (1, 8, 1); placeholder — replace with
        # a real corner expansion for your label format
        expanded_label = label
        return expanded_label
  2. Debugging: Add print statements to verify the shapes of the labels at different stages.

    print(f"Shape of label before conversion: {label.shape}")
    label = self.convert_to_polygonal(label)
    print(f"Shape of label after conversion: {label.shape}")
  3. Ensure Consistency: Make sure that all parts of the code that interact with the labels are updated to handle the new format.
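If the six-value labels are oriented boxes, the expansion in convert_to_polygonal can be implemented by rotating the rectangle corners around the center (a sketch; the assumed (cls, cx, cy, w, h, angle) layout must be verified against your dataset):

```python
import math

# Hypothetical expansion of an oriented box to four corners, assuming the six
# label values are (cls, cx, cy, w, h, angle). Verify against your label format.

def obb_to_corners(cx, cy, w, h, angle):
    """Rotate the four rectangle corners by `angle` (radians) around the center."""
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]:
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

print(obb_to_corners(0.0, 0.0, 2.0, 4.0, 0.0))
# angle 0 gives the axis-aligned corners of a 2x4 box centered at the origin
```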

If the issue persists, providing a minimum reproducible example can greatly help us diagnose and resolve the problem more efficiently. You can find more details on how to create one here.

Feel free to share your progress or any specific challenges you face. We're here to help! 😊

Best of luck with your modifications! πŸš€

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

You're welcome, @Nixson-Okila! 😊

I'm glad to hear you're giving it a try. If you encounter any further issues, please ensure that you are using the latest versions of torch and ultralytics to avoid compatibility problems. You can upgrade them using:

pip install --upgrade torch ultralytics

If the problem persists, providing a minimum reproducible example can greatly help us diagnose and resolve the issue more efficiently. You can find more details on how to create one here.

Feel free to share your progress or any specific challenges you face. We're here to help! Best of luck with your modifications! πŸš€

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

Hello @Nixson-Okila,

The issue you're encountering with mismatched tensor shapes for cls and bboxes in the __call__ function of the Format class likely stems from inconsistencies in the data augmentation or preprocessing steps. Ensure that the number of class labels matches the number of bounding boxes for each image.

First, verify that your data augmentation functions, including mosaic4, correctly handle the transformation of both cls and bboxes tensors. Each augmentation step should maintain the correspondence between class labels and bounding boxes.

Next, check the data loading and preprocessing pipeline to ensure that the shapes of cls and bboxes are consistent before they are passed to the collate_fn(batch) function. This will help maintain the integrity of the data throughout the training process.
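A lightweight way to catch the mismatch early is to assert the cls/bboxes correspondence per image before collate_fn runs (hypothetical structures, illustration only):

```python
# Guard of the kind described: every image's label count must equal its box count.
# Sample dicts and keys are hypothetical stand-ins for the real batch structures.

def check_batch(samples):
    for i, sample in enumerate(samples):
        n_cls, n_box = len(sample["cls"]), len(sample["bboxes"])
        assert n_cls == n_box, f"image {i}: {n_cls} class labels vs {n_box} boxes"
    return True

samples = [
    {"cls": [0, 1],
     "bboxes": [[0, 0, 1, 0, 1, 1, 0, 1], [2, 2, 3, 2, 3, 3, 2, 3]]},
]
print(check_batch(samples))  # True
```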

If the issue persists, please ensure you are using the latest versions of torch and ultralytics. You can upgrade them using:

pip install --upgrade torch ultralytics

Feel free to share any further details or specific challenges you face. We're here to help!

Nixson-Okila avatar Nixson-Okila commented on September 26, 2024

glenn-jocher avatar glenn-jocher commented on September 26, 2024

You're welcome! Please ensure that your data augmentation functions and preprocessing steps maintain the correspondence between class labels and bounding boxes. If the issue persists, verify with the latest versions of torch and ultralytics.
