
deep_inv_opt

Repository Overview

This repository provides a Python package called deep_inv_opt, which is short for "deep inverse optimization." The package solves "inverse optimization" problems in a way that makes use of deep learning techniques. The associated paper, Deep Inverse Optimization, has been accepted to CPAIOR 2019 and is available on arXiv. The repository also contains scripts to generate plots from the paper.

Paper abstract. Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.

How "deep inverse optimization" works

In forward optimization, we start with an optimization model (such as a linear program) and aim to compute a set of decision variables x* that are optimal with respect to that model. For example, given a cost vector c and constraint coefficients A and b, we might aim to compute an x that is optimal with respect to the linear program: minimize cᵀx subject to Ax ≤ b.
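As a concrete illustration of the forward direction, the small LP below can be solved with SciPy's `linprog`. The numbers are made up for this sketch, and SciPy is used purely for illustration; it is not part of the deep_inv_opt API.

```python
# Forward optimization: solve a small example LP
#   minimize    c^T x
#   subject to  A x <= b
# Hypothetical data; scipy is used for illustration only and is not
# a dependency of deep_inv_opt.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])              # cost vector
A = np.array([[-1.0,  0.0],           # -x1       <= 0  (i.e., x1 >= 0)
              [ 0.0, -1.0],           #       -x2 <= 0  (i.e., x2 >= 0)
              [ 1.0,  1.0]])          #  x1 + x2  <= 1
b = np.array([0.0, 0.0, 1.0])

result = linprog(c, A_ub=A, b_ub=b, method="highs")
x_star = result.x  # optimal decision: [0, 0], since both cost entries are positive
```

In inverse optimization the direction is reversed: x_star would be the observation, and c (and possibly A and b) would be the unknowns to recover.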

In inverse optimization (IO), we do not start with a fully-specified optimization model. Instead, we are given a set of observed decision variables xtarget that are presumed to be optimal with respect to some unknown optimization model. Our aim is to recover (to learn) that optimization model from the observed decisions. For more background on IO, see Ahuja and Orlin (2001) or Chan, Lee and Terekhov (2018).

Our method, called deep inverse optimization, provides a gradient-based approach to solving inverse optimization problems. The idea is depicted below for the simplest case, where one wants to recover a cost vector c that is consistent with an observed target. Even if our current cost vector is wrong (leads to x* that is inconsistent with the target), we can backpropagate a gradient to c and adjust it using gradient descent.

The gradient is taken with respect to a 'loss' (a measure of discrepancy, such as sum of squared errors). The "optimization steps" shown above are the steps of an actual LP solver, in this case the interior point method. Running the interior point algorithm can be considered the forward process, and computing the gradient can be considered the backward process. The combined forward-backward process is depicted in Figure 2 of our paper, reproduced below.
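The forward-backward idea can be sketched in a few lines of PyTorch. The sketch below is not the deep_inv_opt implementation: it unrolls a simple penalty-method gradient descent rather than the interior point method, and all problem data are invented. The mechanism is the same, though: every step of the unrolled solve is differentiable, so the loss gradient flows back to the cost vector c.

```python
# Minimal sketch of the forward-backward process with made-up toy data.
# deep_inv_opt itself unrolls an interior point method, not this penalty scheme.
import torch

# Constraints A x <= b encode x1 >= 0, x2 >= 0, x1 + x2 <= 1
A = torch.tensor([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = torch.tensor([0.0, 0.0, 1.0])
x_target = torch.tensor([1.0, 0.0])               # observed decision
c = torch.tensor([1.0, 1.0], requires_grad=True)  # initial (wrong) cost guess

def forward_solve(c, steps=300, lr=0.01, mu=10.0):
    """Unrolled, differentiable approximate LP solve via a quadratic penalty."""
    x = torch.zeros(2)
    for _ in range(steps):
        viol = torch.relu(A @ x - b)           # constraint violations
        g = c + 2.0 * mu * (A.t() @ viol)      # grad of c@x + mu*||viol||^2
        x = x - lr * g                         # each update stays differentiable
    return x

optimizer = torch.optim.Adam([c], lr=0.1)
losses = []
for _ in range(100):
    x_star = forward_solve(c)                  # forward process
    loss = (x_star - x_target).pow(2).sum()    # discrepancy to the target
    losses.append(loss.item())
    optimizer.zero_grad()
    loss.backward()                            # backward process: d(loss)/dc
    optimizer.step()
```

After training, the loss is much smaller than at the start: gradient descent has found a cost vector under which the observed decision is approximately optimal.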

This figure depicts an LP that is parametric both in features u (representing time or other varying conditions of the problem) and in weights w (the coefficients we wish to learn so as to match observed decisions). See the paper for details.
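As a hypothetical illustration of what "parametric" means here, the cost vector can be written as a differentiable function of the feature u with learnable weights w. The specific linear form below is invented for this sketch; the paper's actual parametrization differs.

```python
# Hypothetical parametric cost: the weights w are learned, while the
# feature u varies per observation (e.g., time). Sketch only; the
# parametrization used in the paper and package may differ.
import torch

w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)  # learnable weights

def cost_vector(u, w):
    # A 2-D cost whose first coefficient depends linearly on the feature u
    return torch.stack([w[0] + w[1] * u, w[2]])

# For u = 3.0 this gives c = [0.5 - 3.0, 2.0] = [-2.5, 2.0]. Because the
# mapping is differentiable, loss gradients flow from x*(c(u, w)) back to w.
c = cost_vector(torch.tensor(3.0), w)
```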

Installing the deep_inv_opt package

The package was developed using the following dependencies:

  • Anaconda Python 3.6 64-bit
  • PyTorch 1.0
  • Numpy 1.14
  • Matplotlib 2.2

Clone the repository and install the package in your environment. For example, to install it in development mode, run

$ python setup.py develop

You should now be able to import deep_inv_opt from any location:

>>> import deep_inv_opt as io
>>> io.tensor([[0.1, 1.2]])
tensor([[ 0.1000,  1.2000]], dtype=torch.float64)

You can optionally run the unit tests:

$ nosetests
....
----------------------------------------------------------------------
Ran 4 tests in 22.156s

OK

Using the deep_inv_opt package

See examples by browsing to the examples/ directory in this GitHub repository.

Authors

  • Yingcong Tan - Concordia University
  • Andrew Delong
  • Daria Terekhov - Concordia University

References

  1. Ahuja, R.K. and Orlin, J.B. Inverse optimization. Operations Research, 49(5):771–783 (2001)

  2. Chan, T.C.Y., Lee, T. and Terekhov, D. Goodness of fit in inverse optimization. Management Science (2018)
