
forestflow's Introduction

ForestFlow

ForestFlow is a scalable policy-based cloud-native machine learning model server. ForestFlow strives to strike a balance between the flexibility it offers data scientists and the adoption of standards while reducing friction between Data Science, Engineering and Operations teams.

ForestFlow is policy-based because we believe automation for Machine Learning/Deep Learning operations is critical to scaling human resources. ForestFlow lends itself well to workflows based on automatic retraining, version control, A/B testing, Canary Model deployments, Shadow testing, automatic time or performance-based model deprecation and time or performance-based model routing in real-time.

Our aim with ForestFlow is to give data scientists a simple, low-friction way to deploy models to a production system, accelerating the path from development to production value.

To achieve these goals, ForestFlow addresses the proliferation of model-serving formats and inference API specifications by adopting open source frameworks, formats, and API specifications that we believe are, or are becoming, widely adopted. We do this in a pluggable fashion so ForestFlow can continue to evolve as the industry matures and the need for additional support arises.


Overview

Why ForestFlow?

Continuous deployment and lifecycle management of Machine Learning/Deep Learning models is currently widely accepted as a primary bottleneck for gaining value out of ML projects.

We first set out to find a solution for deploying our own models. The model server implementations we found were either proprietary, closed-source solutions or had too many limitations for what we wanted to achieve. The main motivations for creating ForestFlow can be summarized as:

  • We wanted to reduce friction between our data science, engineering and operations teams
  • We wanted to give data scientists the flexibility to use the tools they wanted (H2O, TensorFlow, Spark export to PFA, etc.)
  • We wanted to automate certain lifecycle management aspects of model deployments like automatic performance or time-based routing and retirement of stale models
  • We wanted a model server that allows easy A/B testing, Shadow (listen-only) deployments and Canary deployments. This lets our Data Scientists experiment with real production data without impacting production, using the same tooling they would when deploying to production.
  • We wanted something that was easy to deploy and scale for different deployment scenarios (on-prem local data center single instance, cluster of instances, Kubernetes managed, Cloud native, etc.)
  • We wanted the ability to treat inference requests as a stream and log predictions as a stream. This allows us to test new models against a stream of older inference requests.
  • We wanted to avoid the "super-hero" data scientist who knows how to dockerize an application, apply the science, build an API and deploy to production. This does not scale well and is difficult to support and maintain.
  • Most of all, we wanted repeatability. We didn't want to re-invent the wheel once we had support for a specific framework.

Model Deployment

For model deployment, ForestFlow supports models described via the MLflow Model format, which allows for different flavors, i.e., frameworks and storage formats.

ForestFlow also supports a BASIC REST API for model deployment that mimics the MLflow Model format but does not require it.
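
As a rough illustration only, the sketch below POSTs a deployment request to a ForestFlow instance from Scala using the JDK 11 HTTP client. The host, port, endpoint path and payload fields are placeholders (assumptions), not the authoritative ForestFlow contract; consult the documentation for the actual servable/contract schema.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object DeployModelSketch extends App {
  // NOTE: the endpoint path and JSON fields below are illustrative placeholders,
  // not the actual ForestFlow deployment contract.
  val payload =
    """{
      |  "path": "git://git@example.com/models/my-model.git",
      |  "flavor": "H2OMojo"
      |}""".stripMargin

  val request = HttpRequest.newBuilder(URI.create("http://localhost:8090/servable"))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(payload))
    .build()

  val response = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString())

  println(s"${response.statusCode()} ${response.body()}")
}
```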

Inference

For inference, we’ve adopted a similar approach. ForestFlow provides two interfaces for maximum flexibility: a BASIC REST API in addition to standardizing on the GraphPipe API specification.

Relying on standards, for example GraphPipe’s specification, means immediate availability of client libraries in a variety of languages that already work with ForestFlow; see GraphPipe clients.

Please visit the quickstart guide for a quick overview of setting up ForestFlow and an example of inference. Also see the Inference documentation for a deeper dive.
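
For orientation, here is a minimal sketch of an inference call against the BASIC REST API. The endpoint path and request body shape are assumptions for illustration; the Inference documentation above defines the real request format.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object ScoreSketch extends App {
  // The schema/datum layout and the /predict path are placeholders, not the
  // documented ForestFlow inference contract.
  val datum = """{"schema": ["f1", "f2"], "datum": [[1.0, 2.0]]}"""

  val request = HttpRequest.newBuilder(URI.create("http://localhost:8090/predict/my-servable"))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(datum))
    .build()

  val prediction = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString())

  println(prediction.body())
}
```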

Currently Supported Model Formats

  • H2O - Mojo Model
  • TensorFlow & Keras - Planned
  • PFA - Planned
  • Spark ML Models and Pipelines via Aardpfark and PFA - Planned

Go to the Quick Start Guide to get started then dive a little deeper and learn about ForestFlow Concepts and how you can tailor it to fit your own use-cases.

Contributing

While ForestFlow has already delivered tremendous value for us in production, it's still in the early phases of development: there are plenty of features planned, and it continues to evolve at a rapid pace. We appreciate, consistently make use of, and contribute back to open source projects in the community. We realize the problems we're facing aren't unique to us, so we welcome feedback, ideas and contributions from the community to help shape ForestFlow's roadmap and implementation.

Check out the Contribution Guide for more details on contributing to ForestFlow.

forestflow's People

Contributors

aalkilani, dependabot[bot], gopikrishna967, ibrahimhaddad, tastaples


forestflow's Issues

Model Explainability: Obtain SHAP values from MOJO model

In an effort to provide explanations for each prediction in ForestFlow, I would need to add .setWithDetailedPredictionCol(true), as outlined here:

https://s3.amazonaws.com/h2o-release/sparkling-water/spark-2.3/3.26.5-2.3/doc/tutorials/shap_values.html#get-contributions-from-raw-mojo

"The call setWithDetailedPredictionCol(true) tells the service to create an additional prediction column with additional prediction details, such as the contributions. The name of this column is by default “detailed_prediction” and can be modified via setDetailedPredictionCol setter."

The addition would need to go around here in ForestFlow:

Add ONNX flavor support

Currently ForestFlow only supports H2O flavors. This is extensible and we'd like to add support for ONNX-based models to cover a wider variety of frameworks that know how to export to ONNX.

This SHOULD look into runtime/platform considerations and, for example, provide switches for GPU support.
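
As one possible direction (not a committed design), the sketch below uses the ONNX Runtime Java API from Scala to show the kind of CPU/GPU execution-provider switch such a flavor could expose; the useGpu flag is a hypothetical configuration option, not an existing ForestFlow setting.

```scala
import ai.onnxruntime.{OrtEnvironment, OrtSession}

object OnnxSessionFactory {
  // useGpu is a hypothetical config switch shown only to illustrate the idea.
  def createSession(modelPath: String, useGpu: Boolean): OrtSession = {
    val env  = OrtEnvironment.getEnvironment()
    val opts = new OrtSession.SessionOptions()
    if (useGpu) opts.addCUDA(0)          // needs the onnxruntime_gpu artifact and a CUDA-capable device
    env.createSession(modelPath, opts)   // otherwise the default CPU execution provider is used
  }
}
```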

Add automatic inference from a Kafka topic

ForestFlow's core doesn't care what the source of inference requests is. It's built this way specifically to enable support for additional interfaces. As it stands today, there's support for a basic HTTP REST API in addition to GraphPipe clients.
We'd like to add support for records coming from Kafka topics, but lay this out in a way that lets us easily extend support to other queuing systems like Faktory or Pulsar, or even an S3 or file-system watcher. This ticket will focus on Kafka first since it's already something ForestFlow integrates with for inference logging.
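
For illustration, a minimal Alpakka Kafka (akka-stream-kafka) consumer that drains raw inference payloads from a topic could look like the sketch below. The topic name, group id and bootstrap servers are placeholders, and an Akka 2.6-style implicit materializer derived from the ActorSystem is assumed.

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, StringDeserializer}

object KafkaInferenceIngestSketch extends App {
  implicit val system: ActorSystem = ActorSystem("kafka-inference-ingest")

  // Bootstrap servers, group id and topic name are illustrative placeholders.
  val settings = ConsumerSettings(system, new StringDeserializer, new ByteArrayDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("forestflow-inference")

  Consumer
    .plainSource(settings, Subscriptions.topics("inference-requests"))
    .map(record => record.value())                                        // raw inference request payload
    .runWith(Sink.foreach(bytes => println(s"received ${bytes.length} bytes")))
}
```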

Add S3 SourceStorageProtocol support

ForestFlow already has stubs for supporting different protocols. See here: https://github.com/ForestFlow/ForestFlow/blob/master/core/src/main/scala/ai/forestflow/utils/SourceStorageProtocols.scala

and here for S3: https://github.com/ForestFlow/ForestFlow/blob/master/core/src/main/scala/ai/forestflow/utils/SourceStorageProtocols.scala#L87

It currently has implementations for Git and local file systems. This ticket is to finish the implementation for S3, sticking with the Akka ecosystem and preferably using Alpakka for S3.
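
A rough sketch of what this might look like with Alpakka S3 is shown below, using the S3.download API found in Alpakka 1.x/2.x (later versions favor S3.getObject). Bucket, key and target path are placeholders, and the real code would plug into SourceStorageProtocols.scala alongside the existing Git and local-file handlers.

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.{FileIO, Sink}
import java.nio.file.Paths

object S3FetchSketch extends App {
  implicit val system: ActorSystem = ActorSystem("s3-fetch")
  import system.dispatcher

  // Bucket, key and target path below are placeholders for illustration only.
  S3.download("my-model-bucket", "models/my-model.zip")
    .runWith(Sink.head)
    .flatMap {
      case Some((data, _)) => data.runWith(FileIO.toPath(Paths.get("/tmp/my-model.zip")))
      case None            => sys.error("S3 object not found")
    }
    .onComplete(result => println(result))
}
```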
