In this repository, we use Amazon SageMaker to build, train, and deploy Faster-RCNN and RetinaNet models using Detectron2. Detectron2 is an open-source project released by Facebook AI Research and built on top of the PyTorch deep learning framework. Detectron2 makes it easy to build, train, and deploy state-of-the-art object detection algorithms. Moreover, Detectron2's design makes it easy to implement cutting-edge research projects without having to fork the entire codebase. Detectron2 also provides a Model Zoo, a collection of pre-trained detection models we can use to accelerate development.
This repository shows how to do the following:
- Build Detectron2 Docker images and push them to Amazon ECR to run training and inference jobs on Amazon SageMaker.
- Register a dataset in the Detectron2 catalog from annotations in augmented manifest files, the output format of Amazon SageMaker Ground Truth annotation jobs.
- Run a SageMaker Training job to fine-tune pre-trained model weights on a custom dataset.
- Configure SageMaker Hyperparameter Optimization jobs to tune hyperparameters.
- Run a SageMaker Batch Transform job to predict bounding boxes on a large set of images.
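To illustrate the dataset-registration step above, here is a minimal sketch of turning one line of a Ground Truth augmented manifest into a Detectron2-style dataset dict. The manifest attribute names (`source-ref`, `sku-labels`) and the sample values are assumptions for illustration; a real labeling job uses the label attribute name you chose when creating it.

```python
import json

# One line of a hypothetical augmented manifest (JSON Lines format).
# "sku-labels" stands in for the label attribute name of your job.
manifest_line = json.dumps({
    "source-ref": "s3://my-bucket/images/0001.jpg",
    "sku-labels": {
        "image_size": [{"width": 640, "height": 480, "depth": 3}],
        "annotations": [
            {"class_id": 0, "left": 10, "top": 20, "width": 30, "height": 40}
        ],
    },
})

def manifest_to_record(line, label_attr):
    """Convert one augmented-manifest line into a Detectron2-style
    dataset dict, with boxes in absolute XYXY coordinates."""
    obj = json.loads(line)
    labels = obj[label_attr]
    size = labels["image_size"][0]
    return {
        "file_name": obj["source-ref"],
        "width": size["width"],
        "height": size["height"],
        "annotations": [
            {
                # Ground Truth stores left/top/width/height; convert to x0,y0,x1,y1.
                "bbox": [a["left"], a["top"],
                         a["left"] + a["width"], a["top"] + a["height"]],
                "bbox_mode": 0,  # BoxMode.XYXY_ABS in Detectron2
                "category_id": a["class_id"],
            }
            for a in labels["annotations"]
        ],
    }

record = manifest_to_record(manifest_line, "sku-labels")
```

A list of such records can then be registered with `DatasetCatalog.register("my_dataset", lambda: records)` so that Detectron2 training loops can consume it.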
Start by cloning this repository into your Amazon SageMaker notebook instance.
Open the notebook, follow the instructions in it, and use the conda_pytorch_p36
kernel to execute the code cells.
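As a small aid to the Docker/ECR step, the sketch below assembles the ECR image URI that the notebook would pass to a SageMaker Estimator. The account ID, region, and repository name shown are placeholders, not values from this repository.

```python
def ecr_image_uri(account_id: str, region: str, repo: str, tag: str = "latest") -> str:
    """Build the fully qualified ECR image URI for a pushed Docker image."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

# Hypothetical usage with the SageMaker Python SDK (not executed here):
# from sagemaker.estimator import Estimator
# estimator = Estimator(image_uri=ecr_image_uri("123456789012", "us-east-1",
#                                               "sagemaker-d2-train"),
#                       role=role, instance_count=1,
#                       instance_type="ml.p3.2xlarge")
```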
You will use a Detectron2 object detection model to recognize objects in densely packed scenes. You will use the SKU-110k dataset for this task. Be aware that the authors of the dataset provided it solely for academic and non-commercial purposes. Please refer to the following paper for further details on the dataset:
@inproceedings{goldman2019dense,
author = {Eran Goldman and Roei Herzig and Aviv Eisenschtat and Jacob Goldberger and Tal Hassner},
title = {Precise Detection in Densely Packed Scenes},
booktitle = {Proc. Conf. Comput. Vision Pattern Recognition (CVPR)},
year = {2019}
}
If you want details on the code used for training and prediction, please refer to the code documentation in the respective source directories.
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.