
Project demonstrating dual model deployment scenarios using Vertex AI (GCP).

Home Page: https://cloud.google.com/blog/topics/developers-practitioners/dual-deployments-vertex-ai

License: Apache License 2.0

Languages: Jupyter Notebook 94.49%, Python 5.51%
Topics: vertex-ai, mlops, tfx, kubeflow, gcp, automl, tensorflow, keras


Dual-Deployments-on-Vertex-AI

By Chansung Park and Sayak Paul

This project demonstrates a workflow covering dual model deployment scenarios using Kubeflow, TensorFlow Extended (TFX), and Vertex AI. We suggest reading the accompanying blog post first for context and then following along with the code. This project also received the #TFCommunitySpotlight Award.

Motivation 💻

Let's say you want to allow your users to run an application in both online and offline mode. Depending on network bandwidth, battery level, and similar constraints, your mobile application would fall back to an on-device TFLite model; when sufficient network coverage and bandwidth are available, it would use the larger cloud-hosted model instead. This way your application stays resilient and can ensure high availability.
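The runtime decision described above can be sketched in plain Python. This is purely illustrative and not part of the project's code; the threshold values and the `pick_model` helper are hypothetical.

```python
# Illustrative sketch (not from the project): choosing between an on-device
# TFLite model and a cloud-hosted model at inference time.
# Both threshold constants below are hypothetical examples.

ONLINE_BANDWIDTH_KBPS = 512  # hypothetical minimum bandwidth for cloud calls
MIN_BATTERY_PERCENT = 20     # hypothetical battery floor for network use


def pick_model(bandwidth_kbps: float, battery_percent: float) -> str:
    """Return which model variant the app should query."""
    if bandwidth_kbps >= ONLINE_BANDWIDTH_KBPS and battery_percent >= MIN_BATTERY_PERCENT:
        return "cloud"   # larger model served from a Vertex AI endpoint
    return "tflite"      # compact on-device fallback keeps the app usable offline


print(pick_model(1000, 80))  # -> cloud
print(pick_model(0, 80))     # -> tflite
```

The point is that the fallback path always exists, so the app degrades gracefully rather than failing when connectivity drops.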

Sometimes we also do layered predictions where we first divide a problem into smaller tasks:

  1. predict a binary yes/no outcome,
  2. depending on the output of 1), run the final model.

In these cases, 1) takes place on-device and 2) takes place on the cloud to ensure a smooth UX. Furthermore, it's a good practice to use a mobile-friendly network architecture (such as MobileNet) when considering mobile deployments. This leads us to the following question:
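The layered (cascade) setup above can be sketched as follows. All function names and the stand-in logic are hypothetical; in practice stage 1 would be a compact on-device model and stage 2 a call to a cloud endpoint.

```python
# Illustrative two-stage prediction cascade (all names hypothetical).
# Stand-in callables replace the real on-device and cloud models.

def gate_model(x: float) -> bool:
    # Stage 1: lightweight on-device yes/no check (stand-in logic).
    return x > 0.5


def cloud_model(x: float) -> str:
    # Stage 2: heavier fine-grained prediction (stand-in logic).
    return "class_a" if x > 0.8 else "class_b"


def predict(x: float) -> str:
    # Only invoke the cloud model when the gate says "yes",
    # saving a network round-trip for the common negative case.
    if not gate_model(x):
        return "negative"
    return cloud_model(x)
```

For example, `predict(0.2)` returns `"negative"` without ever touching the network, which is what keeps the UX smooth.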

Can we train two different models within the same deployment pipeline and manage them seamlessly?

This project is motivated by this question.

AutoML, TFX, etc. 🛠

Different organizations have people with varied technical backgrounds. We wanted to provide the easiest solution first and then move on to something more customizable. To this end, we leverage Kubeflow's AutoML SDKs to build, train, and deploy models for different production use cases. With AutoML, developers can delegate a large part of their workflows to the SDKs, and the codebase also stays comparatively small. The figure below depicts a sample system architecture for this scenario:

Figure developed by Chansung Park.

But the story does not end here. What if we wanted better control over the models being built, trained, and deployed? Enter TFX! TFX provides the flexibility of writing custom components and including them inside a pipeline. This way, machine learning engineers can focus on building and training their favorite models and delegate part of the heavy lifting to TFX and Vertex AI. On Vertex AI (acting as an orchestrator), the pipeline looks like so:
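The core idea — one pipeline producing two deployables — can be sketched in plain Python. All names below are hypothetical stand-ins; the actual project implements this with custom TFX components orchestrated on Vertex AI Pipelines.

```python
# Illustrative sketch of one pipeline yielding two deployment targets
# (all names hypothetical; the real project uses TFX components for this).

def train(architecture: str) -> dict:
    # Stand-in for a Trainer step; returns a mock "model" record.
    return {"architecture": architecture}


def run_dual_pipeline() -> dict:
    # Branch 1: mobile-friendly model, exported for on-device use.
    mobile = train("MobileNetV3")
    # Branch 2: larger model, pushed to a cloud endpoint.
    server = train("DenseNet121")
    return {
        "tflite_artifact": {**mobile, "format": "tflite"},
        "endpoint_model": {**server, "format": "saved_model"},
    }


deployments = run_dual_pipeline()
```

Running both branches inside a single pipeline is what lets the two model variants be versioned, evaluated, and deployed together.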

๐Ÿ”ฅ In this project we cover both these situations. 

Code 🆘

Our code is distributed as Colab Notebooks. Note that one needs a billing-enabled GCP account (with a few APIs enabled) to run these notebooks successfully. Alternatively, one can run them on Vertex AI Notebooks. Find all the notebooks and their descriptions here: notebooks.

Additionally, you can find the custom TFX components separately here: custom_components.

Acknowledgements

We are grateful to the ML-GDE program for providing GCP credits. Thanks to Karl Weinmeister and Robert Crowe for providing review feedback on this project.


Contributors

deep-diver, sayakpaul


Issues

Issues with `tfx.extensions.google_cloud_ai_platform.Pusher`

@deep-diver

I tried tfx.extensions.google_cloud_ai_platform.Pusher with the following specs:

```python
deploy_job_spec = {
    'project_id': project_id,
    'model_name': 'densenet_flowers',
    'deployed_model_display_name': 'densenet_flowers_' + TIMESTAMP,
    'traffic_split': {"0": 100},
    'machine_type': 'n1-standard-4',
    'accelerator_type': aip.AcceleratorType.NVIDIA_TESLA_K80.name,
    'accelerator_count': 1,
    'starting_replica_count': 1,
    'max_replica_count': 1
}
```

I tried it with the same training_job_specs too, but per the logs the request fails, saying the payload parameters are invalid. This is how I used the GCP Pusher:

```python
# Pushes the model to Vertex AI.
densnet_pusher = tfx.extensions.google_cloud_ai_platform.Pusher(
    model=densenet_trainer.outputs['model'],
    custom_config={
        tfx.extensions.google_cloud_ai_platform.experimental.PUSHER_SERVING_ARGS_KEY:
            deploy_job_spec
    }
).with_id("densnet_pusher")
```

I am currently trying with custom components. But if you have any suggestions on the above, please let me know.

Rename and add Colab link

@deep-diver could you change the name of the dual_deployments_with_AutoML.ipynb notebook (adjust the casing, basically) to match the naming of the rest of the notebooks? Also, please add a Colab Notebook link. You can commit your notebook directly from Colab, and it will offer an option to include a Colab link.


Also, please add a small description at the beginning of the notebook to let the readers know what the notebook is about. You can refer to this notebook for an example.
