Features • Installation • Quick Start • Tutorial • Contributing • License
- Housekeeper provides refined management of model (service) registration, deletion, update, and selection.
- Converter converts models into serialized and optimized formats so that they can be deployed to the cloud.
- Profiler simulates real service behavior by invoking a gRPC client and a model service, and produces a detailed report on model runtime performance (e.g., P99 latency and throughput) in a production environment.
- Dispatcher launches a serving system to load a model in a containerized manner and dispatches the MLaaS to a device.
- Controller receives data from the monitor and node exporter, and controls the whole workflow of our system.
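Taken together, a typical pass through these components boils down to a few calls. The sketch below is only an at-a-glance view; each step is explained in the Quick Start, and the argument values are illustrative:

from modelci.hub.manager import register_model, retrieve_model_by_name
from modelci.hub.deployer import serve
from modelci.persistence.bo import Framework, Engine, IOShape

# register a trained model; conversion and profiling run automatically
register_model(
    '<path to model>',
    dataset='ImageNet',
    acc=0.76,
    task='image classification',
    inputs=[IOShape([-1, 3, 224, 224], float)],
    outputs=[IOShape([-1, 1000], float)],
    convert=True,
    profile=True
)

# retrieve a converted version and dispatch it to a device as a service
model_info = retrieve_model_by_name(architecture_name='ResNet50', framework=Framework.PYTORCH, engine=Engine.TORCHSCRIPT)
serve(save_path=model_info.saved_path, device='cuda:0', name='torchscript-serving', batch_size=16)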
bash scripts/install.sh
Note
- Conda and Docker are required to run this installation script.
- To use TensorRT, you have to install TensorRT manually (`sudo` is required). See the instructions here.
docker pull mlmodelci/mlmodelci
The system is still under active development. You are welcome to join us!
MLModelCI provides a complete platform for managing, converting, profiling, and deploying models as cloud services (MLaaS). You only need to register your models with the platform, and it takes care of the rest. To give you a clear start, we present the whole pipeline step by step as follows.
Assume you have a ResNet50 model trained with PyTorch. To deploy it as a cloud service, the first step is to publish the model to our system.
from modelci.hub.manager import register_model
from modelci.persistence.bo import IOShape
# Register a Trained ResNet50 Model to ModelHub.
register_model(
    'home/ResNet50/pytorch/1.zip',
    dataset='ImageNet',
    acc=0.76,
    task='image classification',
    inputs=[IOShape([-1, 3, 224, 224], float)],
    outputs=[IOShape([-1, 1000], float)],
    convert=True,
    profile=True
)
Since a newly trained model cannot be deployed to the cloud directly, MLModelCI automatically converts it to optimized formats (e.g., TorchScript and ONNX).
You can also perform the conversion on your own:
from modelci.hub.converter import ONNXConverter
from modelci.persistence.bo import IOShape
ONNXConverter.from_torch_module(
    '<path to torch model>',
    '<path to export onnx model>',
    input_shape=[IOShape([-1, 3, 224, 224], float)],
    batch_size=16
)
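For intuition, the ONNX conversion is conceptually close to a plain torch.onnx.export call. The sketch below only illustrates that idea and is not MLModelCI internals; the model, file name, and dynamic-axes settings are placeholder assumptions:

import torch
import torchvision

# a trained model (placeholder: a torchvision ResNet50)
model = torchvision.models.resnet50(pretrained=True).eval()
# a dummy batch matching the IOShape above (batch size 16, 3x224x224 inputs)
dummy_input = torch.randn(16, 3, 224, 224)
# export to ONNX, keeping the batch dimension dynamic
torch.onnx.export(
    model,
    dummy_input,
    'resnet50.onnx',
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}}
)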
Before deploying an optimized model as a cloud service, developers need to understand its runtime performance (e.g., latency and throughput) so as to set up a cost-effective solution (batch size? device? serving system? etc.). MLModelCI provides a profiler to automate this process.
You can manually profile your models as follows:
from modelci.hub.client.torch_client import CVTorchClient
from modelci.hub.profiler import Profiler
test_data_item = ...
batch_num = ...
batch_size = ...
model_info = ...
# create a client
torch_client = CVTorchClient(test_data_item, batch_num, batch_size, asynchronous=False)
# init the profiler
profiler = Profiler(model_info=model_info, server_name='name of your server', inspector=torch_client)
# start profiling model
profiler.diagnose()
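As a concrete (hypothetical) way to fill in the placeholders above: model_info can come from retrieve_model_by_name, as in the deployment step below, while the format of test_data_item is an assumption here; check the profiling tutorial for the exact input the client expects.

import torch
from modelci.hub.client.torch_client import CVTorchClient
from modelci.hub.manager import retrieve_model_by_name
from modelci.hub.profiler import Profiler
from modelci.persistence.bo import Framework, Engine

# look up the converted model to profile
model_info = retrieve_model_by_name(architecture_name='ResNet50', framework=Framework.PYTORCH, engine=Engine.TORCHSCRIPT)
# one dummy input sample (the expected format is an assumption)
test_data_item = torch.rand(3, 224, 224)
# profile 100 batches of size 16; server_name is assumed to match the serving container name
torch_client = CVTorchClient(test_data_item, 100, 16, asynchronous=False)
profiler = Profiler(model_info=model_info, server_name='torchscript-serving', inspector=torch_client)
profiler.diagnose()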
MLModelCI provides a dispatcher to deploy a model as a cloud service. The dispatcher launches a serving system (e.g., TensorFlow Serving) to load a model in a containerized manner and dispatches the MLaaS to a device.
We search for a converted model and then dispatch it to a device with a specific batch size.
from modelci.hub.deployer import serve
from modelci.hub.manager import retrieve_model_by_name
from modelci.persistence.bo import Framework, Engine
# get saved model information
model_info = retrieve_model_by_name(architecture_name='ResNet50', framework=Framework.PYTORCH, engine=Engine.TORCHSCRIPT)
# deploy the model to cuda device 0.
serve(save_path=model_info.saved_path, device='cuda:0', name='torchscript-serving', batch_size=16)
Now your model is an efficient cloud service!
For more information please take a look at our tutorials.
Following the Quick Start, we provide detailed tutorials to help users understand our system.
- Register a Model in ModelHub
- Convert a Model to Optimized Formats
- Profile a Model for Cost-Effective MLaaS
- Dispatch a Model as a Cloud Service
- Manage Models with Housekeeper
MLModelCI welcomes your contributions! Please refer here to get started.
Copyright 2020 Nanyang Technological University, Singapore
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.