
ip-nonlinear-solver's Introduction

ip-nonlinear-solver

IMPORTANT: These algorithms have been integrated into the SciPy library (release 1.1 and above) under the name trust-constr. The version available there is more complete and better maintained.

Join the chat at https://gitter.im/ip-nonlinear-solver/GSoC2017

A trust-region interior-point method for general nonlinear programming problems. The implementation is part of my GSoC project for SciPy. A series of blog posts describes different aspects of the algorithm and its use (link).

The implementation is being integrated into SciPy through the pull request:

scipy/scipy#7729

Installation Guide

  1. Download the repository from GitHub
git clone git@github.com:antonior92/ip-nonlinear-solver.git
# or, alternatively:
# git clone https://github.com/antonior92/ip-nonlinear-solver.git
  2. Install the package

Move into the downloaded directory and install requirements with:

pip install -r requirements.txt

Next, install the package with:

python setup.py install
  3. Test the installation

The installation can be tested by running:

python setup.py test

inside the downloaded directory.

Usage Example

Consider the following minimization problem:

min (1/2)*(x - 2)**2 + (1/2)*(y - 1/2)**2,
subject to: 1/(x + 1) - y >= 1/4;
            x >= 0
            y >= 0 

The code for solving this problem is presented below:

# Example
from __future__ import division
import numpy as np
from ipsolver import minimize_constrained, NonlinearConstraint, BoxConstraint

# Define objective function and derivatives
fun = lambda x: 1/2*(x[0] - 2)**2 + 1/2*(x[1] - 1/2)**2
grad = lambda x: np.array([x[0] - 2, x[1] - 1/2])
hess = lambda x: np.eye(2)
# Define nonlinear constraint
c = lambda x: np.array([1/(x[0] + 1) - x[1],])
c_jac = lambda x: np.array([[-1/(x[0] + 1)**2, -1]])
c_hess = lambda x, v: 2*v[0]*np.array([[1/(x[0] + 1)**3, 0], [0, 0]])
nonlinear = NonlinearConstraint(c, ('greater', 1/4), c_jac, c_hess)
# Define box constraint
box = BoxConstraint(("greater",))

# Define initial point
x0 = np.array([0, 0])
# Apply solver
result = minimize_constrained(fun, x0, grad, hess, (nonlinear, box))

# Print result
print(result.x)
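
As a quick sanity check (an illustrative addition, not part of the original example), the returned solution can be verified against the constraints directly:

# Verify that the solution satisfies the constraints (up to a small tolerance)
x_opt = result.x
assert c(x_opt)[0] >= 1/4 - 1e-8   # nonlinear constraint: 1/(x + 1) - y >= 1/4
assert np.all(x_opt >= -1e-8)      # box constraints: x >= 0, y >= 0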

Documentation

The function minimize_constrained solves nonlinear programming problems. It minimizes an objective function subject to constraints, which can be specified using the classes NonlinearConstraint, LinearConstraint and BoxConstraint.

minimize_constrained

minimize_constrained(fun, x0, grad, hess='2-point', constraints=(),
                     method=None, xtol=1e-8, gtol=1e-8, sparse_jacobian=None,
                     options={}, callback=None, max_iter=1000, verbose=0)

    Minimize scalar function subject to constraints.

    Parameters
    ----------
    fun : callable
        The objective function to be minimized.

            fun(x) -> float

        where x is an array with shape (n,).
    x0 : ndarray, shape (n,)
        Initial guess. Array of real elements of size (n,),
        where ``n`` is the number of independent variables.
    grad : callable
        Gradient of the objective function:

            grad(x) -> array_like, shape (n,)

        where x is an array with shape (n,).
    hess : {callable, '2-point', '3-point', 'cs', None}, optional
        Method for computing the Hessian matrix. The keywords
        select a finite difference scheme for numerical
        estimation. The scheme '3-point' is more accurate, but requires
        twice as many operations compared to '2-point' (default). The
        scheme 'cs' uses complex steps, and while potentially the most
        accurate, it is applicable only when `fun` correctly handles
        complex inputs and can be analytically continued to the complex
        plane. If it is a callable, it should return the 
        Hessian matrix of `dot(fun, v)`:

            hess(x, v) -> {LinearOperator, sparse matrix, ndarray}, shape (n, n)

        where x is a (n,) ndarray and v is a (m,) ndarray. When ``hess``
        is None, the Hessian is assumed to be a matrix of zeros.
    constraints : Constraint or list of Constraint objects, optional
        A single object or a list of objects specifying
        constraints to the optimization problem.
        Available constraints are:

            - `BoxConstraint`
            - `LinearConstraint`
            - `NonlinearConstraint`

    method : {str, None}, optional
        Type of solver. Should be one of:

            - 'equality_constrained_sqp'
            - 'tr_interior_point'

        When None, the most appropriate method is chosen:
        'equality_constrained_sqp' for problems that
        only have equality constraints and 'tr_interior_point'
        for general optimization problems.
    xtol : float, optional
        Tolerance for termination by the change of the independent variable.
        The algorithm will terminate when ``delta < xtol``, where ``delta``
        is the algorithm trust-radius. Default is 1e-8.
    gtol : float, optional
        Tolerance for termination by the norm of the Lagrangian gradient.
        The algorithm will terminate when both the infinity norm (i.e. max
        abs value) of the Lagrangian gradient and the constraint violation
        are smaller than ``gtol``. Default is 1e-8.
    sparse_jacobian : {bool, None}
        The algorithm uses a sparse representation of the Jacobian if True
        and a dense representation if False. When sparse_jacobian is None
        the algorithm chooses the most convenient option, using a sparse
        representation if at least one of the constraint Jacobians is sparse
        and a dense representation when they are all dense arrays.
    options : dict, optional
        A dictionary of solver options. Available options include:

            initial_trust_radius : float
                Initial trust-region radius. By default uses 1.0, as
                suggested in [1]_, p. 19, immediately after algorithm III.
            initial_penalty : float
                Initial penalty for the merit function. By default uses 1.0,
                as suggested in [1]_, p. 19, immediately after algorithm III.
            initial_barrier_parameter : float
                Initial barrier parameter. Exclusive to the
                'tr_interior_point' method. By default uses 0.1, as
                suggested in [1]_, p. 19, immediately after algorithm III.
            initial_tolerance : float
                Initial subproblem tolerance. Exclusive to the
                'tr_interior_point' method. By default uses 0.1, as
                suggested in [1]_, p. 19, immediately after algorithm III.
            return_all : bool, optional
                When True, return the list of all vectors
                through the iterations.
            factorization_method : string, optional
                Method used for factorizing the Jacobian matrix.
                Should be one of:

                - 'NormalEquation': The operators
                   will be computed using the
                   so-called normal equation approach
                   explained in [1]_. In order to do
                   so the Cholesky factorization of
                   ``(A A.T)`` is computed. Exclusive
                   for sparse matrices. Requires
                   scikit-sparse installed.
                - 'AugmentedSystem': The operators
                   will be computed using the
                   so-called augmented system approach
                   explained in [1]_. It performs the
                   LU factorization of an augmented
                   system. Exclusive for sparse matrices.
                - 'QRFactorization': Compute projections
                   using QR factorization. Exclusive for
                   dense matrices.
                - 'SVDFactorization': Compute projections
                   using SVD factorization. Exclusive for
                   dense matrices.

                The factorization methods 'NormalEquation' and
                'AugmentedSystem' should be used only when
                ``sparse_jacobian=True``. They usually provide
                similar results. The methods 'QRFactorization'
                and 'SVDFactorization' should be used when
                ``sparse_jacobian=False``. By default,
                'QRFactorization' is used for dense matrices.
                The 'SVDFactorization' method can cope
                with Jacobian matrices with deficient row
                rank and will be used whenever the other
                factorization methods fail (which may
                imply conversion to a dense format).

    callback : callable, optional
        Called after each iteration:

            callback(OptimizeResult state) -> bool

        If callback returns True the algorithm execution is terminated.
        ``state`` is an `OptimizeResult` object with the same fields
        as the ones described in the Returns section.
    max_iter : int, optional
        Maximum number of algorithm iterations. By default ``max_iter=1000``.
    verbose : {0, 1, 2}, optional
        Level of algorithm's verbosity:

            * 0 (default) : work silently.
            * 1 : display a termination report.
            * 2 : display progress during iterations.

    Returns
    -------
    `OptimizeResult` with the following fields defined:
    x : ndarray, shape (n,)
        Solution found.
    s : ndarray, shape (n_ineq,)
        Slack variables at the solution. ``n_ineq`` is the total number
        of inequality constraints.
    v : ndarray, shape (n_ineq + n_eq,)
        Estimated Lagrange multipliers at the solution. ``n_ineq + n_eq``
        is the total number of equality and inequality constraints.
    niter : int
        Total number of iterations.
    nfev : int
        Total number of objective function evaluations.
    ngev : int
        Total number of objective function gradient evaluations.
    nhev : int
        Total number of Lagrangian Hessian evaluations. Each time the
        Lagrangian Hessian is evaluated, the objective function
        Hessian and the constraints Hessians are evaluated
        one time each.
    ncev : int
        Total number of constraint evaluations. The same counter
        is used for equality and inequality constraints, because
        they are always evaluated the same number of times.
    njev : int
        Total number of constraint Jacobian matrix evaluations.
        The same counter is used for equality and inequality
        constraint Jacobian matrices, because they are always
        evaluated the same number of times.
    cg_niter : int
        Total number of CG iterations.
    cg_info : dict
        Dictionary containing information about the latest CG iteration:

            - 'niter' : Number of iterations.
            - 'stop_cond' : Reason for CG subproblem termination:

                1. Iteration limit was reached;
                2. Reached the trust-region boundary;
                3. Negative curvature detected;
                4. Tolerance was satisfied.

            - 'hits_boundary' : True if the proposed step is on the boundary
              of the trust region.

    execution_time : float
        Total execution time.
    trust_radius : float
        Trust radius at the last iteration.
    penalty : float
        Penalty function at last iteration.
    tolerance : float
        Tolerance for barrier subproblem at the last iteration.
        Exclusive for 'tr_interior_point'.
    barrier_parameter : float
        Barrier parameter at the last iteration. Exclusive for
        'tr_interior_point'.
    status : {0, 1, 2, 3}
        Termination status:

            * 0 : The maximum number of function evaluations is exceeded.
            * 1 : `gtol` termination condition is satisfied.
            * 2 : `xtol` termination condition is satisfied.
            * 3 : `callback` function requested termination.

    message : str
        Termination message.
    method : {'equality_constrained_sqp', 'tr_interior_point'}
        Optimization method used.
    constr_violation : float
        Constraint violation at last iteration.
    optimality : float
        Norm of the Lagrangian gradient at last iteration.
    fun : float
        For the 'equality_constrained_sqp' method this is the objective
        function evaluated at the solution and for the 'tr_interior_point'
        method this is the barrier function evaluated at the solution.
    grad : ndarray, shape (n,)
        For the 'equality_constrained_sqp' method this is the gradient of the
        objective function evaluated at the solution and for the
        'tr_interior_point' method this is the gradient of the barrier
        function evaluated at the solution.
    constr : ndarray, shape (n_ineq + n_eq,)
        For the 'equality_constrained_sqp' method these are the equality
        constraints evaluated at the solution and for the 'tr_interior_point'
        method these are the equality and inequality constraints evaluated at
        a given point (with the inequality constraints incremented by the
        value of the slack variables).
    jac : {ndarray, sparse matrix}, shape (n_ineq + n_eq, n)
        For the 'equality_constrained_sqp' method this is the Jacobian
        matrix of the equality constraints evaluated at the solution and
        for the 'tr_interior_point' method this is the scaled augmented
        Jacobian matrix, defined as ``\hat{A}`` in equation (19.36),
        reference [2]_, p. 581.

    Notes
    -----
    Method 'equality_constrained_sqp' is an implementation of the
    Byrd-Omojokun trust-region SQP method described in [3]_ and
    in [2]_, p. 549. It solves equality-constrained
    optimization problems by solving, at each substep,
    a trust-region QP subproblem. The inexact solution of these
    QP problems using a projected CG method makes this method
    appropriate for large-scale problems.

    Method 'tr_interior_point' is an implementation of the
    trust-region interior-point method described in [1]_.
    It solves general nonlinear problems by introducing slack variables
    and solving a sequence of equality-constrained barrier problems
    for progressively smaller values of the barrier parameter.
    The previously described equality-constrained SQP method is used
    to solve the subproblems with increasing levels of accuracy as
    the iterate gets closer to a solution. It is also an
    appropriate method for large-scale problems.

    References
    ----------
    .. [1] Byrd, Richard H., Mary E. Hribar, and Jorge Nocedal.
           "An interior point algorithm for large-scale nonlinear
           programming." SIAM Journal on Optimization 9.4 (1999): 877-900.
    .. [2] Nocedal, Jorge, and Stephen J. Wright. "Numerical optimization"
           Second Edition (2006).
    .. [3] Lalee, Marucha, Jorge Nocedal, and Todd Plantenga. "On the
           implementation of an algorithm for large-scale equality
           constrained optimization." SIAM Journal on
           Optimization 8.3 (1998): 682-706.
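
As an illustration of the callback mechanism documented above, here is a minimal sketch reusing the names from the usage example (the stopping tolerance is arbitrary):

# Stop the solver early once first-order optimality drops below a loose tolerance
def callback(state):
    # `state` is an OptimizeResult with the fields described in Returns
    return state.optimality < 1e-4   # returning True stops the solver early

result = minimize_constrained(fun, x0, grad, hess, (nonlinear, box),
                              callback=callback, verbose=1)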

NonlinearConstraint

NonlinearConstraint(fun, kind, jac, hess='2-point', enforce_feasibility=False)

    Nonlinear constraint.

    Parameters
    ----------
    fun : callable
        The function defining the constraint.

            fun(x) -> array_like, shape (m,)

        where x is a (n,) ndarray and ``m``
        is the number of constraints.
    kind : {str, tuple}
        Specifies the type of constraint. Options for this
        parameter are:

            - ('interval', lb, ub) for a constraint of the type:
                lb <= fun(x) <= ub
            - ('greater', lb) for a constraint of the type:
                fun(x) >= lb
            - ('less', ub) for a constraint of the type:
                fun(x) <= ub
            - ('equals', c) for a constraint of the type:
                fun(x) == c
            - ('greater',) for a constraint of the type:
                fun(x) >= 0
            - ('less',) for a constraint of the type:
                fun(x) <= 0
            - ('equals',) for a constraint of the type:
                fun(x) == 0

        where ``lb``,  ``ub`` and ``c`` are (m,) ndarrays or
        scalar values. In the latter case, the same value
        will be repeated for all the constraints.
    jac : callable
        Jacobian matrix:

            jac(x) -> {ndarray, sparse matrix}, shape (m, n)

        where x is a (n,) ndarray.
    hess : {callable, '2-point', '3-point', 'cs', None}
        Method for computing the Hessian matrix. The keywords
        select a finite difference scheme for numerical
        estimation. The scheme '3-point' is more accurate, but requires
        twice as many operations compared to '2-point' (default). The
        scheme 'cs' uses complex steps, and while potentially the most
        accurate, it is applicable only when `fun` correctly handles
        complex inputs and can be analytically continued to the complex
        plane. If it is a callable, it should return the 
        Hessian matrix of `dot(fun, v)`:

            hess(x, v) -> {LinearOperator, sparse matrix, ndarray}, shape (n, n)

        where x is a (n,) ndarray and v is a (m,) ndarray. When ``hess``
        is None, the Hessian is assumed to be a matrix of zeros.
    enforce_feasibility : {list of booleans, boolean}, optional
        Specifies whether the constraint must remain feasible throughout
        the iterations. If ``True``, all the iterates generated by the
        optimization algorithm need to be feasible with respect to the
        constraint; if ``False``, this is not required. A list can be
        passed to specify element-wise which constraints need to stay
        feasible and which do not. Alternatively, a single boolean sets
        the feasibility requirement for all constraints. By default it
        is False.
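
A minimal sketch based on the signature above, reusing c and c_jac from the usage example: the same constraint, but letting the solver estimate the constraint Hessian by finite differences instead of supplying it analytically.

# '2-point' finite differences replace the analytic Hessian c_hess
nonlinear_fd = NonlinearConstraint(c, ('greater', 1/4), c_jac, hess='2-point')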

LinearConstraint

LinearConstraint(A, kind, enforce_feasibility=False)

    Linear constraint.

    Parameters
    ----------
    A : {ndarray, sparse matrix}, shape (m, n)
        Matrix for the linear constraint.
    kind : {str, tuple}
        Specifies the type of constraint. Options for this
        parameter are:

            - ('interval', lb, ub) for a constraint of the type:
                lb <= A x <= ub
            - ('greater', lb) for a constraint of the type:
                A x >= lb
            - ('less', ub) for a constraint of the type:
                A x <= ub
            - ('equals', c) for a constraint of the type:
                A x == c
            - ('greater',) for a constraint of the type:
                A x >= 0
            - ('less',) for a constraint of the type:
                A x <= 0
            - ('equals',) for a constraint of the type:
                A x == 0

        where ``lb``,  ``ub`` and ``c`` are (m,) ndarrays or
        scalar values. In the latter case, the same value
        will be repeated for all the constraints.
    enforce_feasibility : {list of booleans, boolean}, optional
        Specifies whether the constraint must remain feasible throughout
        the iterations. If ``True``, all the iterates generated by the
        optimization algorithm need to be feasible with respect to the
        constraint; if ``False``, this is not required. A list can be
        passed to specify element-wise which constraints need to stay
        feasible and which do not. Alternatively, a single boolean sets
        the feasibility requirement for all constraints. By default it
        is False.
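
A minimal sketch based on the signature above (the matrices are illustrative): the constraints x + 2*y <= 1 and x - y == 0 expressed as LinearConstraints.

import numpy as np
from ipsolver import LinearConstraint

ineq = LinearConstraint(np.array([[1, 2]]), ('less', 1))   # x + 2*y <= 1
eq = LinearConstraint(np.array([[1, -1]]), ('equals',))    # x - y == 0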

BoxConstraint

BoxConstraint(kind, enforce_feasibility=False)

    Box constraint.

    Parameters
    ----------
    kind : tuple
        Specifies the type of constraint. Options for this
        parameter are:

            - ('interval', lb, ub) for a constraint of the type:
                lb <= x <= ub
            - ('greater', lb) for a constraint of the type:
                x >= lb
            - ('less', ub) for a constraint of the type:
                x <= ub
            - ('equals', c) for a constraint of the type:
                x == c
            - ('greater',) for a constraint of the type:
                x >= 0
            - ('less',) for a constraint of the type:
                x <= 0
            - ('equals',) for a constraint of the type:
                x == 0

        where ``lb``,  ``ub`` and ``c`` are (m,) ndarrays or
        scalar values. In the latter case, the same value
        will be repeated for all the constraints.
    enforce_feasibility : {list of booleans, boolean}, optional
        Specifies whether the constraint must remain feasible throughout
        the iterations. If ``True``, all the iterates generated by the
        optimization algorithm need to be feasible with respect to the
        constraint; if ``False``, this is not required. A list can be
        passed to specify element-wise which constraints need to stay
        feasible and which do not. Alternatively, a single boolean sets
        the feasibility requirement for all constraints. By default it
        is False.
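
A minimal sketch based on the signature above: keep 0 <= x <= 1 for all variables and require every iterate to stay feasible.

from ipsolver import BoxConstraint

box = BoxConstraint(('interval', 0, 1), enforce_feasibility=True)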

ip-nonlinear-solver's People

Contributors

antonior92, gitter-badger, nmayorov


ip-nonlinear-solver's Issues

Project decisions

I think it is a good time for us to start discussing the solver interface. My general impression is that trying to fit the solver we are implementing into the minimize interface may not be practical. What do you guys think about creating a separate solver?

I see the minimize interface as very appropriate for the needs of unconstrained problems,
but the way to access the two constrained solvers (SLSQP and COBYLA) seems a little clumsy to me. And I think it would be much worse for our solver.

If we go for a different interface, we need to think about how the parameters are going to be passed by the user.


I see two possible ways to specify constraints:

One option is to formulate the problem as: (Option number 1)

min f(x)
subject to: h(x) = 0
            g(x) >= 0
            lb <= x <= ub

for callables f, h and g.

Another option is to formulate the problem as: (Option number 2)

min f(x)
subject to: h(x) = 0
            clb <= c(x) <= cub
            lb <= x <= ub

I have seen this interface in several commercial solvers and I like it a lot. While it is not usually how optimization problems are described in textbooks, it is very intuitive and allows us to formulate constrained problems in a very easy way. A hypothetical sketch of such a signature is given below.
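
A hypothetical sketch of what such an interface could look like (the function and argument names are illustrative, not an existing API):

def solve(f, x0, h=None, c=None, clb=None, cub=None, lb=None, ub=None):
    """Minimize f(x) subject to h(x) == 0, clb <= c(x) <= cub
    and lb <= x <= ub."""
    ...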

Test Results

Results of the SQP solver on the test problem ELEC.

This problem is described in [1], problem 2, and consists of the following: given np electrons, find the equilibrium state distribution (of minimal Coulomb potential) of the electrons positioned on a conducting sphere.

The performance of other commercial solvers (according to [1]):
[Table from [1]: performance of commercial solvers on the ELEC problem]

I chose this problem because the KNITRO package performs well on it, so our solver is expected to perform well on it too, since the underlying principle is the same.

For np = 200 the performance of our implementation is:

time = 9.1 sec
f = 1.84392e+04 
c violation = 1.2e-15
optimality = 9.6e-03
niter = 125

As you can see, the execution time is very competitive and the final value of f seems very close to the one obtained by the KNITRO package. The constraint violation also looks good.

I don't know how optimality is defined in the table above, but I defined it as the norm of the Lagrangian gradient, ||grad L(x, lambda)||. I have not been able to reduce this optimality below
9.6e-3; after that the algorithm step gets too small and no further progress is made.
In the table from the paper, KNITRO seems to be able to reduce the optimality down to 3.9e-7,
but again, maybe that is only because of different definitions.

Besides that, the decay of this optimality measure ||grad L(x, lambda)|| seems pretty consistent throughout the iterations.

[Plot: decay of ||grad L(x, lambda)|| over the iterations for ELEC]

My initial plan to design small tests of my own has not been very successful, mostly because I did not manage to come up with meaningful tests by myself. So I have been focusing on testing against problems from the CUTEst collection. Nevertheless, I want to include some unit tests for the method. What do you think would be the best option in this case? Implement problems from CUTEst in Python and use them for the unit tests? One possible shape for such a test is sketched below.

[1]    Dolan, Elizabeth D., Jorge J. Moré, and Todd S. Munson. "Benchmarking optimization software with COPS 3.0." Argonne National Laboratory Research Report (2004).
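
A sketch of the kind of unit test I have in mind (the problem, tolerances and test name are illustrative, not taken from CUTEst): a small problem with a known analytic solution.

import numpy as np
from ipsolver import minimize_constrained, LinearConstraint

def test_equality_constrained_quadratic():
    # min x**2 + y**2  subject to  x + y == 1; solution is (0.5, 0.5)
    fun = lambda z: z[0]**2 + z[1]**2
    grad = lambda z: 2*z
    hess = lambda z: 2*np.eye(2)
    constr = LinearConstraint(np.array([[1, 1]]), ('equals', 1))
    result = minimize_constrained(fun, np.array([5.0, -3.0]),
                                  grad, hess, (constr,))
    np.testing.assert_allclose(result.x, [0.5, 0.5], atol=1e-6)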

scikit-sparse usage

Just an issue to discuss how scikit-sparse usage should be handled in the SciPy release.

Which way would be the best in your opinion?

Code style

I was reading your code from time to time and some things drew my attention. I decided to create a separate issue for that.

  1. Describing arrays as "unidimensional". I suggest an alternative approach: mention array shapes in the type annotation (as is usually done), introducing the notation (like n, m, etc.) beforehand in the docstring body (often it is OK to even omit it and introduce names right in the shape description). As for the description itself, think of something more specific and precise. Example:
b : array_like, shape (m,)
   Right-hand side of the constraint equation.

I understand that it looks somewhat like overkill, but I think it is a more intelligent way to handle things in this situation.

  2. The same goes for returned arrays: writing their shape will make it easier to understand their
    relation to the problem formulation.
  3. This is rather pedantic: the PEP 257 convention prescribes using one-line summary docstrings and then going into more detail if necessary. In my defense, I was asked to make such fixes by charris back then; it was OK for me and I even somehow managed to achieve it. Even worse, in theory the summary should start on the same line as the triple quotes.
  4. From the same category: the story about "comment with a period" vs "comment without a period". If you have any specific reasons not to use periods, don't bother.
  5. Sometimes you didn't use += (etc.) operators when possible, which looks a bit odd to my eye. Not sure about efficiency, by the way; it could make a very small difference because of temporary memory allocation.
  6. Parentheses around not operands in if statements are usually unnecessary.
  7. Another preferential thing: I don't like using len applied to numpy arrays; I would personally use shape with the required index. For me it more clearly separates numpy structures from pure Python structures, and it is more explicit (see the snippet below).
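
A small illustration of point 7 (illustrative only): both expressions below evaluate to 3, but indexing shape makes explicit that a numpy array, and which axis, is meant.

import numpy as np

a = np.zeros((3, 2))
len(a)       # 3 -- also works on lists, so the array-ness stays implicit
a.shape[0]   # 3 -- unambiguous: size of the first axis of an ndarray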

Course of action

OK, so I did some research today and decided to take a bottom-up approach to implementing the algorithm. I intend to follow this sequence:

  1. Implement the projected CG;
  2. Implement the Trust-Region QP problem with Relaxation Method;
  3. Implement the SQP Trust-Region method for the barrier problem;
  4. Implement the nonlinear problem (interior-point method) as a series of barrier problems;
  5. Integrate the solver into SciPy;
  6. Implement quasi-Newton options.

Each step uses the previous one as a substep, and I will try to create layers of abstraction in order to better cope with that. I think each of steps 2, 3 and 4 should take approximately a week to implement.

I already created a wiki page for step 2, "Implement the Trust-Region QP problem with Relaxation Method".

What do the two of you think about this plan?

NonlinearConstraint, hess(x,v)

I don't quite understand why there is a "v" in hess(x, v), the Hessian matrix of the constraint functions. For me, it is just a function of "x". Could you tell me how to derive that, based on the example you provide?
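
My reading of the documentation above, as an illustrative sketch (not an authoritative answer): for constraints c(x) with components c_1(x), ..., c_m(x), hess(x, v) should return the Hessian of the scalar product dot(c(x), v), i.e. the sum of v[i] times the Hessian of c_i at x. The weights v are Lagrange multiplier estimates supplied by the solver, which needs this weighted combination to build the Hessian of the Lagrangian.

import numpy as np

def c_hess(x, v):
    # Single constraint c(x) = 1/(x0 + 1) - x1 from the usage example:
    # its Hessian is [[2/(x0 + 1)**3, 0], [0, 0]], weighted by v[0]
    return 2 * v[0] * np.array([[1 / (x[0] + 1)**3, 0], [0, 0]])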

Comparison between EQP solvers

In the last two days I have been busy installing CUTEst and pycutestmgr (which took way longer than I was expecting) and performing tests. I compared the direct factorization method with two approaches using projected CG:

  • One using the normal equation approach;
  • The other using the augmented system approach;

I performed experiments on the following set of problems:
CVXQP1_S, CVXQP2_S, CVXQP3_S, CVXQP1_M, CVXQP2_M, CVXQP3_M, CVXQP1_L, CVXQP2_L, CVXQP3_L, CONT-050, CONT-100, DPKL01, MOSARQP1, DUAL1, DUAL2, DUAL3, DUAL4, PRIMAL1, PRIMAL2, LASER. The description of the problems can be found in [1] (convex QP problems). Each problem is represented by a single circle in the plots below.

I compared the methods with regard to execution time and the error of the optimality measures. I used the CG methods with very tight tolerances.

About the execution time: the time it takes to factorize the augmented system is much smaller than the time it takes to solve the system by direct factorization of the KKT matrix (about 1000x smaller for large problems), as illustrated in the plot below.
[Plot: factorization time, augmented system vs. direct KKT factorization]

Furthermore, the time to solve the problem using either of the CG approaches is usually much smaller than the time to solve it by direct factorization.
[Plot: total solution time, projected CG approaches vs. direct factorization]

The concern about large roundoff errors when using the CG approach is not unfounded. The next scatter plot shows that for some of the large problems both the error in the equality constraint
and the Lagrangian gradient norm are very large when they should be zero.

[Scatter plot: equality-constraint error and Lagrangian gradient norm per problem]

References

[1] Istvan Maros and Csaba Meszaros. "A repository of convex quadratic programming problems."
Optimization Methods and Software (OMS), Volumes 11&12 (December 1999), 671-681.
