Comments (11)
👋 Hello @jiangxiaobaichunniang, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Install
Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.
pip install ultralytics
Environments
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
- Notebooks with free GPU:
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
from ultralytics.
@jiangxiaobaichunniang hi there!
Thank you for bringing this to our attention. Currently, the save_crop function is not supported for the Oriented Bounding Box (OBB) task in YOLOv8. This feature is primarily designed for standard bounding boxes.
However, you can manually crop the oriented bounding regions using the coordinates provided by the OBB predictions. Here’s a quick example to get you started:
import cv2
import numpy as np

from ultralytics import YOLO

# Load the model
model = YOLO("yolov8n-obb.pt")

# Run inference
results = model("path/to/your/image.jpg")

# Extract OBB coordinates and crop
for result in results:
    for idx, obb in enumerate(result.obb.xyxyxyxy):
        points = obb.cpu().numpy().reshape((-1, 1, 2)).astype(int)
        mask = cv2.fillPoly(np.zeros_like(result.orig_img), [points], (255, 255, 255))
        cropped_img = cv2.bitwise_and(result.orig_img, mask)
        cv2.imwrite(f"cropped_{idx}.png", cropped_img)
This code snippet will help you crop the oriented bounding regions manually. If you encounter any issues or need further assistance, please let us know!
For more detailed guidance, you can visit our Object Cropping Guide.
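Note that the fillPoly/bitwise_and approach above writes out an image the same size as the original, with everything outside the box blacked out. If you would rather save just the region itself, you can additionally slice out the axis-aligned bounding box of the OBB corners. A minimal sketch in plain NumPy (tight_crop is a hypothetical helper, not part of the Ultralytics API):

```python
import numpy as np

def tight_crop(img, points):
    """Crop img to the axis-aligned bounding box of an OBB's corner points.

    points: array-like of shape (4, 2) holding (x, y) corners.
    """
    pts = np.asarray(points, dtype=int)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    # Clamp to image bounds before slicing
    h, w = img.shape[:2]
    x0, y0 = max(x0, 0), max(y0, 0)
    x1, y1 = min(x1, w - 1), min(y1, h - 1)
    return img[y0:y1 + 1, x0:x1 + 1]
```

Applying tight_crop to cropped_img after the bitwise_and keeps the black mask inside the box but drops the empty border around it, so the saved file is only as large as the detection.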
Hi, glenn-jocher! Your method solves the problem of OBB not supporting save_crop, and the results are amazing!
So that every user with the same confusion as me can batch-crop images, I have made improvements on top of your example. I still hope the Ultralytics team can solve this in YOLOv8 soon, to make the workflow more convenient for all users.
Thank you very much!
import cv2
import numpy as np
import os

from ultralytics import YOLO

# Load the model
model = YOLO("yolov8n-obb.pt")

path = "path/to/your/folder"
img_list = os.listdir(path)

for file in img_list:
    filename = os.path.splitext(file)[0]
    data = os.path.join(path, file)

    # Run inference
    results = model.predict(source=data, imgsz=640)

    # Extract OBB coordinates and crop
    for result in results:
        for idx, obb in enumerate(result.obb.xyxyxyxy, start=1):
            points = obb.cpu().numpy().reshape((-1, 1, 2)).astype(int)
            mask = cv2.fillPoly(np.zeros_like(result.orig_img), [points], (255, 255, 255))
            cropped_img = cv2.bitwise_and(result.orig_img, mask)
            cv2.imwrite(os.path.join("path/to/save", f"{filename}_crop_{idx}.jpg"), cropped_img)
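One caveat with the batch loop above: os.listdir returns every entry in the folder, so a stray .txt file or subdirectory would get passed to model.predict. A small filter avoids that (a sketch; the extension set is just an assumption about what your folder contains):

```python
import os

# Extensions treated as images; extend this set for your own data
IMG_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".tif", ".tiff"}

def list_images(folder):
    """Return only the image files in a folder, in sorted order."""
    return sorted(
        f for f in os.listdir(folder)
        if os.path.splitext(f)[1].lower() in IMG_EXTS
    )
```

Replacing `img_list = os.listdir(path)` with `img_list = list_images(path)` keeps the rest of the loop unchanged.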
Thank you for your positive feedback and for sharing your improved solution! We're thrilled to hear that the method provided has been helpful to you and that you've extended it to batch crop images. Your contribution is greatly appreciated and will certainly benefit other users facing similar challenges.
We understand the importance of having built-in support for cropping oriented bounding boxes (OBB) directly within YOLOv8. Our team is continuously working on enhancing the functionality and user experience of our models, and your feedback is invaluable in guiding these improvements.
In the meantime, your shared code is an excellent workaround for users needing this functionality. For those who might be new to this, here’s a slightly refined version of your code for clarity:
import cv2
import numpy as np
import os

from ultralytics import YOLO

# Load the model
model = YOLO("yolov8n-obb.pt")

input_path = "path/to/your/folder"
output_path = "path/to/save"
img_list = os.listdir(input_path)

for file in img_list:
    filename = os.path.splitext(file)[0]
    data = os.path.join(input_path, file)

    # Run inference
    results = model.predict(source=data, imgsz=640)

    # Extract OBB coordinates and crop
    for result in results:
        for idx, obb in enumerate(result.obb.xyxyxyxy):
            points = obb.cpu().numpy().reshape((-1, 1, 2)).astype(int)
            mask = cv2.fillPoly(np.zeros_like(result.orig_img), [points], (255, 255, 255))
            cropped_img = cv2.bitwise_and(result.orig_img, mask)
            cv2.imwrite(os.path.join(output_path, f"{filename}_crop_{idx}.jpg"), cropped_img)
We encourage users to keep their packages up-to-date to benefit from the latest features and fixes. If you encounter any issues, please ensure you are using the most recent versions of torch and ultralytics.
For any further enhancements or issues, feel free to open a new issue or discussion. Your engagement helps us improve and serve the community better.
Thank you again for your contribution! 😊
Thank you very much for your further improvement. Thank you for your help!
Thank you for your kind words and for sharing your enhanced solution! We're delighted to hear that the method provided has been beneficial to you and that you've extended it to batch crop images. Your contribution is greatly appreciated and will undoubtedly help other users facing similar challenges.
If you encounter any further issues or have additional questions, please ensure you are using the latest versions of torch and ultralytics. Keeping your packages up-to-date can often resolve unexpected issues.
For any new bugs or issues, please provide a minimum reproducible code example as outlined in our documentation. This helps us investigate and address the problem more efficiently.
Thank you again for your engagement and contribution to the community! 😊
Hi, I made another version of the OBB cropping that, instead of just blacking out the rest of the image, actually rotates the image and saves smaller cropped images, in case that is what you want.
import cv2
import numpy as np
import os

from ultralytics import YOLO


def crop_rect(img, rect):
    # unpack the rotated rectangle: (center, size, angle)
    center, size, angle = rect[0], rect[1], rect[2]
    center, size = tuple(map(int, center)), tuple(map(int, size))
    # image dimensions
    height, width = img.shape[0], img.shape[1]
    # rotation matrix about the rectangle center
    M = cv2.getRotationMatrix2D(center, angle, 1)
    # rotate the whole image so the rectangle becomes axis-aligned
    img_rot = cv2.warpAffine(img, M, (width, height))
    # crop the now-upright rectangle
    img_crop = cv2.getRectSubPix(img_rot, size, center)
    return img_crop, img_rot


if __name__ == "__main__":
    # Load the model
    model = YOLO("runs/obb/train18/weights/best.pt")

    input_path = "predictme"
    output_path = "predictme_crops"
    img_list = os.listdir(input_path)

    for file in img_list:
        filename = os.path.splitext(file)[0]
        data = os.path.join(input_path, file)

        # Run inference
        print("Predict a new image")
        results = model.predict(source=data, imgsz=640)

        # Extract OBB coordinates and crop
        for result in results:
            for idx, obb in enumerate(result.obb.xyxyxyxy):
                points = obb.cpu().numpy().reshape((-1, 1, 2)).astype(int)
                rect = cv2.minAreaRect(points)
                print("rect: {}".format(rect))
                # img_crop is the cropped rectangle, img_rot the rotated image
                img_crop, img_rot = crop_rect(result.orig_img, rect)
                cv2.imwrite(os.path.join(output_path, f"{filename}_crop_{idx}.jpg"), img_crop)
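For anyone wondering why crop_rect works: cv2.getRotationMatrix2D builds an affine matrix that rotates about the rectangle's center, so the center stays fixed while the box becomes axis-aligned, which is exactly what getRectSubPix needs. The same matrix can be reproduced in plain NumPy (a sketch following the formula documented for getRotationMatrix2D in the OpenCV docs) to sanity-check that fixed point:

```python
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """Pure-NumPy equivalent of cv2.getRotationMatrix2D.

    Returns the 2x3 affine matrix that rotates points by angle_deg
    (counter-clockwise in image coordinates) about `center`.
    """
    cx, cy = center
    a = scale * np.cos(np.radians(angle_deg))
    b = scale * np.sin(np.radians(angle_deg))
    return np.array([
        [a,  b, (1 - a) * cx - b * cy],
        [-b, a, b * cx + (1 - a) * cy],
    ])

# The rotation center must map to itself under the transform
M = rotation_matrix_2d((50.0, 30.0), 37.0)
p = M @ np.array([50.0, 30.0, 1.0])
```

Because the center is a fixed point and the transform (with scale 1) preserves distances, the rectangle after warpAffine is the same size as before, just upright.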
Hi, quitmeyer! Thank you for your help. I have tried your method and it works excellently, but I found local distortion in the cropped image.
How odd! I am not experiencing any distortion on my machine (Windows 10, running Python from VS Code).
Hi @quitmeyer,
Thank you for your feedback and for trying out the method! I'm glad to hear that it was helpful. 😊
Regarding the local distortion in the cropped image, this issue might be related to the rotation and cropping process. To address it, you can try adjusting the interpolation method used during the rotation. The default interpolation in cv2.warpAffine is cv2.INTER_LINEAR, which might cause some distortion. You can experiment with other methods like cv2.INTER_CUBIC or cv2.INTER_LANCZOS4 for better results.
Here's an updated version of the crop_rect function with an adjustable interpolation method:
import cv2
import numpy as np
import os

from ultralytics import YOLO


def crop_rect(img, rect, interpolation=cv2.INTER_CUBIC):
    # unpack the rotated rectangle: (center, size, angle)
    center, size, angle = rect[0], rect[1], rect[2]
    center, size = tuple(map(int, center)), tuple(map(int, size))
    # image dimensions
    height, width = img.shape[0], img.shape[1]
    # rotation matrix about the rectangle center
    M = cv2.getRotationMatrix2D(center, angle, 1)
    # rotate the whole image with the chosen interpolation
    img_rot = cv2.warpAffine(img, M, (width, height), flags=interpolation)
    # crop the now-upright rectangle
    img_crop = cv2.getRectSubPix(img_rot, size, center)
    return img_crop, img_rot


if __name__ == "__main__":
    # Load the model
    model = YOLO("runs/obb/train18/weights/best.pt")

    input_path = "predictme"
    output_path = "predictme_crops"
    img_list = os.listdir(input_path)

    for file in img_list:
        filename = os.path.splitext(file)[0]
        data = os.path.join(input_path, file)

        # Run inference
        print("Predict a new image")
        results = model.predict(source=data, imgsz=640)

        # Extract OBB coordinates and crop
        for result in results:
            for idx, obb in enumerate(result.obb.xyxyxyxy):
                points = obb.cpu().numpy().reshape((-1, 1, 2)).astype(int)
                rect = cv2.minAreaRect(points)
                print("rect: {}".format(rect))
                # img_crop is the cropped rectangle, img_rot the rotated image
                img_crop, img_rot = crop_rect(result.orig_img, rect)
                cv2.imwrite(os.path.join(output_path, f"{filename}_crop_{idx}.jpg"), img_crop)
Try this updated version and see if it resolves the distortion issue. If the problem persists, please provide more details about your environment and any specific conditions that might be affecting the results.
Thank you for your patience and collaboration! If you have any further questions or need additional assistance, feel free to ask.
Cool! I myself never noticed any distortion. Maybe that person had an object on the very edge of the image, and cropping there led to distortion?
Both INTER_CUBIC and the other interpolation options look identical to me.
Anyway, I incorporated your updated script into mine!
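If the distortion really does come from boxes touching the image border, one workaround is to pad the image before rotating, so no part of the rotated box ever leaves the frame. A sketch in plain NumPy (pad_for_rotation is a hypothetical helper; padding by half the box diagonal is an assumption that should always be enough for the box itself):

```python
import numpy as np

def pad_for_rotation(img, rect, pad=None):
    """Pad an image so a rotated-rect crop cannot run off the border.

    rect is (center, size, angle) as returned by cv2.minAreaRect.
    Returns the padded image and the rect with its center shifted
    into the padded coordinate frame.
    """
    (cx, cy), (w, h), angle = rect
    if pad is None:
        # half the box diagonal covers any rotation of the box
        pad = int(np.ceil(np.hypot(w, h) / 2))
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    return padded, ((cx + pad, cy + pad), (w, h), angle)
```

With this, you would call `padded, shifted_rect = pad_for_rotation(result.orig_img, rect)` and then pass `padded` and `shifted_rect` into crop_rect instead of the originals.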