From a model in an IPython notebook to a production-ready API service in 5 minutes.
BentoML is a Python library for packaging and deploying machine learning models. It does two things without changing your model training workflow:

- Standardizes how your ML model is packaged for production, including its preprocessing/feature-fetching code, dependencies, and configuration.
- Makes it easy to distribute your ML model as a PyPI package, an API server (in a Docker image), a command-line tool, or a Spark/Flink UDF.
```bash
pip install bentoml
```
Verify installation:
```bash
bentoml --version
```
BentoML does not change your training workflow. Let's train a simple scikit-learn model as an example:
```python
from sklearn import svm
from sklearn import datasets

# Train a support vector classifier on the iris dataset
clf = svm.SVC(gamma='scale')
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf.fit(X, y)
```
To package this model with BentoML, you need to create a new BentoService subclass and provide artifacts and an env definition for it:
```python
%%writefile iris_classifier.py
from bentoml import BentoService, api, env, artifacts
from bentoml.artifact import PickleArtifact
from bentoml.handlers import DataframeHandler

@artifacts([PickleArtifact('model')])
@env(conda_dependencies=["scikit-learn"])
class IrisClassifier(BentoService):

    @api(DataframeHandler)
    def predict(self, df):
        return self.artifacts.model.predict(df)
```
Now, to save your trained model for production use, simply import your BentoService class and pack it with the required artifacts:
```python
from iris_classifier import IrisClassifier

svc = IrisClassifier.pack(model=clf)
svc.save('./saved_bento', version='v0.0.1')  # saves archive to ./saved_bento/IrisClassifier/v0.0.1/
```
That's it. You have now created your first BentoArchive: a directory containing all the source code, data, and configuration files required to run this model in production. There are a few ways you can use this archive:
To expose your model as an HTTP API endpoint, you can simply use the `bentoml serve` command:

```bash
bentoml serve --archive-path="./saved_bento/IrisClassifier/v0.0.1/"
```
Note: you must ensure the pip and conda dependencies are available in your Python environment when using the `bentoml serve` command. More commonly, we recommend using the BentoML API server with Docker (see below).
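Once the server is running, you can send it prediction requests over HTTP. Below is a minimal client sketch, assuming the server listens on the default port 5000 and exposes the `predict` API at a `/predict` route:

```python
# Minimal client sketch for a locally running BentoML API server.
# Assumes the default port 5000 and a /predict route named after the API function.
import requests

response = requests.post(
    "http://localhost:5000/predict",
    json=[[5.1, 3.5, 1.4, 0.2]],  # one sample with four iris feature columns
)
print(response.text)
```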
You can build a Docker image that runs an API server hosting your BentoML archive by using the archive folder as the docker build context:
```bash
cd ./saved_bento/IrisClassifier/v0.0.1/
docker build -t myorg/iris-classifier .
```
Next, you can `docker push` the image to your registry of choice for deployment, or run it locally for development and testing:

```bash
docker run -p 5000:5000 myorg/iris-classifier
```
You can also load the BentoArchive directly in a Python session and use it like a regular Python object:

```python
import bentoml

bento_svc = bentoml.load('./saved_bento/IrisClassifier/v0.0.1/')
bento_svc.predict(X[0:1])  # pass a 2-D slice, since scikit-learn expects a 2-D array
```
BentoML also supports loading an archive directly from an S3 location:
```python
# Assumes AWS credentials are available in the environment
bento_svc = bentoml.load('s3://my-bento-svc/iris_classifier/')
```
First, install your exported BentoService with `pip`:

```bash
pip install ./saved_bento/IrisClassifier/v0.0.1/
```
Now you can import it and use it as a Python module:
```python
from IrisClassifier import IrisClassifier

installed_svc = IrisClassifier()
installed_svc.predict(X[0:1])  # again passing a 2-D slice
```
Note that you can also publish your exported BentoService as a public Python package on pypi.org, or upload it to your organization's private PyPI index:
```bash
cd ./saved_bento/IrisClassifier/v0.0.1/
python setup.py sdist upload
```
When you `pip install` a BentoML archive, it also provides a CLI tool for accessing your BentoService's APIs from the command line:
```bash
pip install ./saved_bento/IrisClassifier/v0.0.1/

IrisClassifier info
IrisClassifier predict --input='./test.csv'
```
Alternatively, you can also use the `bentoml` CLI to load and run a BentoArchive directly:
```bash
bentoml info ./saved_bento/IrisClassifier/v0.0.1/
bentoml predict ./saved_bento/IrisClassifier/v0.0.1/ --input='./test.csv'
```
CLI access makes it easy to put your saved BentoArchive into an Airflow DAG, integrate your packaged ML model into a testing environment, or use it in combination with other shell tools; see the sketch below.
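For example, here is a minimal sketch of scheduling batch predictions through the packaged CLI with an Airflow BashOperator; the DAG id, schedule, and file paths are hypothetical:

```python
# Hypothetical Airflow DAG that runs batch predictions via the BentoML CLI.
# The DAG id, schedule, and archive/input paths are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

dag = DAG('iris_batch_predict', start_date=datetime(2019, 1, 1), schedule_interval='@daily')

batch_predict = BashOperator(
    task_id='predict_from_csv',
    bash_command=(
        "bentoml predict ./saved_bento/IrisClassifier/v0.0.1/ "
        "--input='./test.csv'"
    ),
    dag=dag,
)
```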
All examples can be found in the BentoML/examples directory.
- Quick Start with sklearn
- Sentiment Analysis with Scikit-Learn
- Text Classification with TensorFlow Keras
- Fashion MNIST classification with PyTorch
- Fashion MNIST classification with TensorFlow Keras
- More examples coming soon!
We built BentoML because we think there should be a much simpler way for machine learning teams to ship models to production. They should not have to wait for engineering teams to re-implement their models for the production environment, or build complex feature pipelines for experimental models.
Our vision is to empower Machine Learning scientists to build and ship their own models end-to-end as production services, just like software engineers do. BentoML is essentially the missing 'build tool' for Machine Learning projects.
With that in mind, here are the top design goals for BentoML:
- Multiple framework support - BentoML supports a wide range of ML frameworks out of the box, including TensorFlow, PyTorch, Scikit-Learn, and XGBoost, and can be easily extended to work with new or custom frameworks.
- Best practices built in - BentoML users can easily customize telemetry and logging for their model, making it easy to integrate with production systems.
- Streamlined deployment workflows - BentoML supports deploying models as REST API endpoints with Docker, Kubernetes, AWS EC2, ECS, Google Cloud Platform, AWS SageMaker, and Azure ML.
- Custom model runtimes - Easily integrate your Python code with high-performance model runtime backends (e.g. tf-serving, tensorrt-inference-server) for real-time model serving.
BentoML is under active development. The current version is a beta release; we may change APIs in future releases.
Want to help build BentoML? Check out our contributing documentation.
BentoML is GPL-3.0 licensed, as found in the COPYING file.