
spotz's Introduction

spotz

Spark Parameter Optimization

Spotz is a hyperparameter optimization framework written in Scala designed to exploit Apache Spark to perform its distributed computation. A broad set of optimization algorithms have been implemented to solve for the hyperparameter values of an objective function that you specify.

Motivation

The eHarmony modeling team primarily uses Spark and Scala as the base of its machine learning pipeline. Given that Spark is our distributed compute engine of choice, we need a robust hyperparameter optimization framework that integrates well with Spark. Excellent frameworks such as Hyperopt and Optunity already exist, but they are written in Python; an equivalent framework that runs in Scala on top of Spark does not. MLlib, though it provides some support for grid search, is not a general framework for hyperparameter tuning and does not integrate with other learners. This project's purpose is to build a simple framework that developers can integrate with Spark to fulfill their hyperparameter optimization needs.

Vowpal Wabbit

At eHarmony, we make heavy use of Vowpal Wabbit. We use this learner so much that we feel strong integration with VW is very important. Considering that Vowpal Wabbit does not support hyperparameter optimization out of the box, we've taken steps to support it without losing generality.

Optimization Algorithms

Currently the following solvers have been implemented:

  • Random search
  • Grid search

Maven dependency

To use this as part of a Maven build, add the following dependency:

<dependency>
    <groupId>com.eharmony</groupId>
    <artifactId>spotz-core</artifactId>
    <version>1.0.0</version>
</dependency>
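
If you build with sbt instead, the equivalent dependency would presumably be the following; the coordinates are taken from the Maven snippet above, and whether the artifact is published with a Scala-version suffix is an assumption worth verifying:

// sbt equivalent (assumed; same coordinates as the Maven snippet above)
libraryDependencies += "com.eharmony" % "spotz-core" % "1.0.0"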

Usage

Using this framework consists of writing the following boilerplate code:

  1. Import the default definitions from the spotz Preamble object. Importing from a library Preamble is a Scala convention to bring default definitions into the current scope.
  2. Define the objective function.
  3. Define the space of hyperparameter values that you wish to search.
  4. Select the solver.

Imports

Import the default definitions from the spotz Preamble object:

import com.eharmony.spotz.Preamble._

Objective Function Trait

Define your objective function by implementing the Objective[P, L] trait.

import com.eharmony.spotz.objective.Objective

trait Objective[P, L]  {
  def apply(point: P): L
}

Note that the objective function trait is parameterized as [P, L], where P is the point and L is the loss. Your objective simply implements the trait's apply(point: P): L method. The point is an abstract representation of the current hyperparameter values and is passed in through the apply method; the loss is the value returned from evaluating the objective function at that point. The framework's default implementation provides a Point class for the P type parameter within the Preamble object and uses Double as the loss type. Again, importing the default definitions from the Preamble object is required for this to work.

The Branin-Hoo function is shown here as a simple example.

import com.eharmony.spotz.Preamble._
import com.eharmony.spotz.objective.Objective

import scala.math.{Pi, cos, pow}

class BraninObjective extends Objective[Point, Double] {
  val a = 1
  val b = 5.1 / (4 * pow(Pi, 2))
  val c = 5 / Pi
  val r = 6
  val s = 10
  val t = 1 / (8 * Pi)

  /**
    *  Input Domain:
    *  This function is usually evaluated on
    *  x1 ∈ [-5, 10], x2 ∈ [0, 15].
    *
    *  Global Minimum:
    *  f(x*) = 0.397887 at x* = (-Pi, 12.275), (Pi, 2.275), (9.42478, 2.475)
    *
    * @param point the point holding the "x1" and "x2" hyperparameter values
    * @return the value of the Branin function evaluated at that point
    */
  override def apply(point: Point): Double = {
    val x1 = point.get[Double]("x1")
    val x2 = point.get[Double]("x2")

    a * pow(x2 - b*pow(x1, 2) + c*x1 - r, 2) + s*(1-t)*cos(x1) + s
  }
}
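
As a quick sanity check, the objective can be evaluated at one of the documented minima. This is a sketch only: the README shows points being read via point.get, but the Point constructor used below is an assumption.

// Hypothetical usage; the Point constructor shape is an assumption.
val branin = new BraninObjective
val loss = branin(new Point(Map("x1" -> math.Pi, "x2" -> 2.275)))
// Expect a loss close to the documented global minimum, ~0.397887.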

Hyperparameter Space

Define the space of hyperparameter values that you desire to search. This space is defined differently depending on the chosen optimizer.

For random search, the space is defined by a Map where the key is a string label and the value is an implementation of the RandomSampler trait. Several classes implementing RandomSampler are provided; for a complete list, refer to the documentation.

val space = Map(
  ("x0", UniformDouble(0, 1)),
  ("x1", RandomChoice("foo", "bar"))
)
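
The RandomSampler trait itself is not reproduced in this README. As a rough sketch of what a custom sampler might look like, assuming the trait's apply method takes a commons-math random generator (both the import paths and the signature below are assumptions):

// Hypothetical custom sampler; import paths and apply(rng) signature assumed.
import com.eharmony.spotz.optimizer.hyperparam.RandomSampler
import org.apache.commons.math3.random.RandomGenerator

class LogUniformDouble(low: Double, high: Double) extends RandomSampler[Double] {
  // Sample log-uniformly between low and high (both assumed positive).
  override def apply(rng: RandomGenerator): Double =
    math.exp(math.log(low) + rng.nextDouble() * (math.log(high) - math.log(low)))
}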

For grid search, the space is defined by a Map where the key is a string label and the value is an Iterable[T].

val space = Map(
  ("x0", Range.Double(0, 1, 0.01)),
  ("x1", Seq("foo", "bar"))
)

Choose Solver

Select the algorithm of your choice to perform the optimization. Some algorithms require a stop strategy, which tells the solver when to stop searching the defined hyperparameter space for the best hyperparameter values.

val stopStrategy = StopStrategy.stopAfterMaxTrials(maxTrials)
val optimizer = new SparkRandomSearch[Point, Double](sparkContext, stopStrategy)
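
Grid search is selected analogously, except that it enumerates the whole space and therefore needs no stop strategy. The class name below is assumed by analogy with SparkRandomSearch; check the optimizer package for the exact name:

// Assumed by analogy with SparkRandomSearch; name and package may differ.
val gridOptimizer = new SparkGridSearch[Point, Double](sparkContext)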

Stop Strategies

Currently, there are a few ways to specify stopping criteria:

  • Stopping after maximum time duration:
StopStrategy.stopAfterMaxDuration(maxDuration)
  • Stopping after maximum number of trials:
StopStrategy.stopAfterMaxTrials(maxTrials)
  • Stopping after a maximum number of trials or a maximum time duration:
StopStrategy.stopAfterMaxTrialsOrMaxDuration(maxTrials, maxDuration)

Full Example

Wiring it all together and using the Branin objective function defined above, here is all the boilerplate necessary to make the example work.

import com.eharmony.spotz.Preamble._
import com.eharmony.spotz.optimizer.StopStrategy
import com.eharmony.spotz.optimizer.random.SparkRandomSearch
import com.eharmony.spotz.optimizer.hyperparam.UniformDouble
import com.eharmony.spotz.examples.BraninObjective
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("Branin Function Trials"))
val space = Map(
  ("x1", new UniformDouble(-5, 10)),
  ("x2", new UniformDouble(0, 15))
)
val stopStrategy = StopStrategy.stopAfterMaxTrials(100000)
val optimizer = new SparkRandomSearch[Point, Double](sc, stopStrategy)
val result = optimizer.minimize(new BraninObjective, space)
sc.stop()
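
The returned result presumably carries the best point found and its loss; since this README does not show the result type's accessors, the safest illustration is simply printing it:

// Hypothetical usage; exact accessors on the result are not shown in this README.
println(result)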

spotz's People

Contributors

jmorra, vsuthichai

spotz's Issues

Adaptive batch sizing

Tune the batch size adaptively so that the user does not need to specify it. The batch size becomes important when the caller wants the optimizer to finish within some maximum duration: too large a batch size delays duration checks while processing occurs on the cluster, while too small a batch size causes frequent return trips to the driver, each of which incurs some constant overhead.
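
A rough sketch of one possible scheme (hypothetical code, not part of spotz): rescale the batch size after each round so that a batch takes roughly a fixed wall-clock slice, which keeps duration checks timely without paying the per-batch driver overhead too often.

// Hypothetical sketch: steer the batch size toward a target per-batch duration.
def nextBatchSize(current: Int, lastBatchMillis: Long, targetMillis: Long): Int = {
  val scaled = (current * targetMillis.toDouble / math.max(lastBatchMillis, 1L)).toInt
  // Clamp growth to 2x per round so a duration check is never delayed for long.
  math.max(1, math.min(scaled, current * 2))
}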

Ability for a sampler to depend on previous parameters' values

Hi,

We're interested in Spotz, but we have a need for a param to depend on previously-sampled ones (e.g. if mode is ON, vary between 0 and 5, otherwise between 5 and 10).

The change would probably need to happen here:

factory(params.map { case (label, sampler) => (label, sampler(rng)) } )

Instead of a map, we'd fold over the params list, passing the previously-calculated params along with the current rng argument.
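
Roughly, the fold we have in mind would look like this (the two-argument sampler call is the proposed new signature, not the current one):

// Sketch of the proposal: each sampler also sees the params sampled so far.
val sampled = params.foldLeft(Map.empty[String, Any]) {
  case (acc, (label, sampler)) => acc + (label -> sampler(rng, acc))
}
factory(sampled)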

This would be a breaking change as it is (since all samplers need their apply functions to receive an extra parameter), but maybe there's a way to make it smoother?
We'd provide a PR if that's acceptable.

I'm interested in your suggestions, if there are other, simpler ways. Thanks!

Refactor VW cache distribution

There's a slowdown with VW cache distribution at the beginning of the Spark job. Refactor this logic to zip and distribute the VW dataset to the executors before VW cache generation begins.

Dataset loader for k-fold Cross Validation

Partition a dataset into k folds and create VW train and test cache files for every fold. Distribute these cache files to the executors so that they can be used by the objective function.
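
A generic sketch of the partitioning step in plain Scala (hypothetical, not spotz- or VW-specific; the cache-file generation and distribution would sit on top of this):

// Split a dataset into k folds; pair each held-out fold with the rest as a train set.
def kFolds[T](data: Seq[T], k: Int): Seq[(Seq[T], Seq[T])] = {
  val folds = data.zipWithIndex.groupBy(_._2 % k).values.map(_.map(_._1)).toSeq
  folds.indices.map { i =>
    val test = folds(i)                                      // held-out fold
    val train = folds.indices.filter(_ != i).flatMap(folds)  // all other folds
    (train, test)
  }
}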
