
optizelle's Introduction

Optizelle

Brought to you by OptimoJoe

Optizelle [op-tuh-zel] is an open-source software library designed to solve general-purpose nonlinear optimization problems of the form

min f(x)
min f(x)  st  g(x) = 0
min f(x)  st  h(x) ≥ 0
min f(x)  st  g(x) = 0, h(x) ≥ 0

It features

  • State of the art algorithms
    • Unconstrained -- steepest descent, preconditioned nonlinear-CG (Fletcher-Reeves, Polak-Ribiere, Hestenes-Stiefel), BFGS, Newton-CG, SR1, trust-region Newton, Barzilai-Borwein two-point approximation
    • Equality constrained -- inexact composite-step SQP
    • Inequality constrained -- primal-dual interior point method for cone constraints (linear, second-order cone, and semidefinite), log-barrier method for cone constraints
    • Constrained -- any combination of the above
  • Open source
    • Released under the 2-Clause BSD License
    • Free and ready to use with both open- and closed-source commercial codes
  • Multilanguage support
    • Interfaces to C++, MATLAB/Octave, and Python
  • Robust computations and repeatability
    • Can stop, archive, and restart the computation from any optimization iteration
    • Combined with the multilanguage support, the optimization can be started in one language and migrated to another. For example, archived optimization runs that started in Python can be migrated and completed in C++.
  • User-defined parallelism
    • Fully compatible with OpenMP, MPI, or GPUs
  • Extensible linear algebra
    • Supports user-defined vector algebra and preconditioners
    • Enables sparse, dense, and matrix-free computations
    • Ability to define custom inner products and compatibility with preconditioners such as algebraic multigrid make Optizelle well-suited for PDE constrained optimization
  • Sophisticated control of the optimization algorithms
    • Allows the user to insert arbitrary code into the optimization algorithm, which enables custom heuristics to be embedded without modifying the source. For example, in signal processing applications, the optimization iterates could be run through a band-pass filter at the end of each optimization iteration.
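
The manual documents the real mechanism for this hook (a state manipulator); the toy loop below is not Optizelle code, it only illustrates the idea of running user code on the iterate each iteration. Here the hook clips the iterate, standing in for something like a band-pass filter:

```python
def gradient_descent(grad, x0, step=0.1, iters=50, callback=None):
    """Toy optimization loop with a per-iteration user hook."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        if callback is not None:
            x = callback(x)  # e.g. run the iterate through a filter
    return x

# Minimize f(x) = sum((x_i - 1)^2); the hook clips each iterate to [0, 2]
grad = lambda x: [2.0 * (xi - 1.0) for xi in x]
clip = lambda x: [min(max(xi, 0.0), 2.0) for xi in x]
x_star = gradient_descent(grad, [5.0, -3.0], callback=clip)
```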

Download

For precompiled, 64-bit packages, please download the package appropriate for your platform: Windows, macOS, or Linux.

Installation should be as easy as opening the package and following the instructions found therein.

Documentation

We provide a full set of instructions for building, installing, and using Optizelle in our manual, available in letter and A4 paper sizes.

Source

For the source, please download a zipped archive of our code. For power users, we provide public access to our Git repository on our GitHub page. In order to clone the Optizelle repository, use the command

git clone https://github.com/OptimoJoe/Optizelle.git

Building and installation may be as simple as executing the following commands from the base Optizelle directory:

  1. mkdir build
  2. cd build
  3. ccmake ..
  4. Configure the build
  5. make install

For more detailed instructions, please consult our documentation.

Support

For general questions, please visit our community forum. In addition, we provide paid support and consulting for Optizelle. If you are interested, please contact us.

Contributing

We appreciate community contributions to Optizelle! If you notice a bug, please file a report on our issues page. Alternatively, for more in-depth contributions, clone our repository and send a pull request via our GitHub page. Finally, we appreciate help in answering general user questions on our community forum.

optizelle's People

Contributors

ccober6, josyoun, jschueller, sscoll

optizelle's Issues

Fix third party library installation on Windows

On Windows, CMake puts all of the dlls in the bin directory and not in lib. Since Windows automatically looks in the executable directory for any dlls, this makes sense. Unfortunately, this messes up the rest of my installation since I'm used to libraries being in lib and not bin. Really, I don't want two different build setups for Windows and POSIX systems. In any case, what's supposed to happen now is that CMake should install the third party libraries locally in the build directory and then move the dlls from bin to lib. Then, everything should work. Unfortunately, that magic is using the old directory name of installed instead of thirdparty. This needs to be fixed.

Build failure: ‘const class Json::Value’ has no member named ‘isUInt64’

On branch develop, jsoncpp 0.6.0, and flag -std=c++14, I get the following failure:

[  6%] Building CXX object src/cpp/optizelle/CMakeFiles/optizelle_cpp.dir/json.cpp.o
/home/david/gits/Optizelle/src/cpp/optizelle/json.cpp: In function ‘Optizelle::Natural Optizelle::json::read::natural(const Optizelle::Messaging&, const Json::Value&, const string&)’:
/home/david/gits/Optizelle/src/cpp/optizelle/json.cpp:94:42: error: ‘const class Json::Value’ has no member named ‘isUInt64’
                 if(json.isUInt() || json.isUInt64())
                                          ^
/home/david/gits/Optizelle/src/cpp/optizelle/json.cpp:98:46: error: ‘const class Json::Value’ has no member named ‘isInt64’
                 else if(json.isInt() || json.isInt64()) {

I have manually indicated jsoncpp lives at /usr/lib64/jsoncpp.so

Add performance of augmented system solves to our output

Right now, we don't output any performance information about our augmented system solves. This makes it difficult to determine whether or not we've implemented our preconditioners properly. As such, we need to output information about how many iterations we took and the error we achieved during our augmented system solves.

Fix the specification of alpha0

Right now, on line-search methods, we search a distance of 2 alpha0 in front of the current step. It's not clear to me why this isn't just alpha0. Basically, it's causing me grief when trying to set an initial alpha0 since I normally think of how big I want the initial steps to be and I always forget the factor of 2. If it's not important, we should really just make the maximum search distance alpha0.
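
For reference, a toy backtracking search showing where the factor enters; search_factor below is a hypothetical knob, not an Optizelle parameter (2 matches the current behavior, 1 the proposal):

```python
def backtracking(f, x, dx, alpha0, search_factor=2.0, rho=0.5, max_trials=20):
    # First trial step length is search_factor * alpha0; the current
    # behavior corresponds to search_factor = 2, the proposal to 1.
    alpha = search_factor * alpha0
    fx = f(x)
    for _ in range(max_trials):
        if f(x + alpha * dx) < fx:
            return alpha
        alpha *= rho
    return alpha

# Minimize f(t) = t^2 from x = 1 along dx = -1
alpha = backtracking(lambda t: t * t, 1.0, -1.0, alpha0=1.0)
```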

Reduce the number of Krylov iterations spent on trust-region algorithms on a rejected step

When a trust-region method rejects a step, we don't move; we reduce the size of the trust region and then compute the Krylov solve again. Unless the trust region starts cutting into the Cauchy point, it turns out that many of these iterations are exactly the same as they were before. Now, for small problems, this is wasted compute time and not a big deal. However, for something like a reduced-space method for parameter estimation, this is two forward solves and two adjoint solves per iteration. For Gauss-Newton, this is one forward solve and one adjoint solve per iteration. For PDE solves, this is very expensive and a complete waste.

In order to fix this, we could just checkpoint the Krylov solve. This would improve compute time at the cost of memory. In theory, we could make this work, but I'm not all that keen on trying to maintain restart infrastructure for the Krylov methods.

As an alternative, I'm pretty sure that adding a dogleg safeguard would fix the problem. Basically, we compute the Cauchy point and then we compute the result from the truncated Krylov method. Then, we do a dogleg step from the truncated-Krylov step to the Cauchy point. Due to how truncated CG works, the dogleg path is monotonically increasing in norm and the model is monotonically nonincreasing. As such, we can just run a standard dogleg algorithm on it. This means that we could cut back the computed step in a much more efficient manner since we'd avoid new Krylov iterations. Eventually, we'd retreat to the Cauchy point, which means that the convergence results would still hold. In addition, we shouldn't interfere with our high-order convergence results since, in theory, the trust region wouldn't be active if we're close to the solution. In any case, this is pretty easy to do for the unconstrained algorithms. For the composite-step SQP algorithm, it's trickier since we may want to cut back the normal step and tangential step independently. I'm pretty sure it's workable, but it requires some thought.
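
The dogleg safeguard amounts to a small scalar computation: find the point on the segment from the Cauchy point toward the truncated-Krylov step whose norm equals the trust-region radius. A plain-Python sketch, not Optizelle code:

```python
import math

def dogleg(p_cauchy, p_newton, radius):
    # Solve ||p_cauchy + tau * (p_newton - p_cauchy)|| = radius for tau,
    # assuming ||p_cauchy|| <= radius <= ||p_newton||.
    d = [pn - pc for pn, pc in zip(p_newton, p_cauchy)]
    a = sum(di * di for di in d)
    b = 2.0 * sum(pc * di for pc, di in zip(p_cauchy, d))
    c = sum(pc * pc for pc in p_cauchy) - radius * radius
    tau = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return [pc + tau * di for pc, di in zip(p_cauchy, d)]

p = dogleg([0.5, 0.0], [2.0, 2.0], radius=1.0)
```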

Add support for absolute stopping tolerances

Right now, all of our stopping tolerances are relative. Most of the time, this is ok, unless we're solving a sequence of optimization problems in a row and we use a warm start. In this case, we're likely still feasible, so reducing this tolerance again is difficult. As such, the best option in this case is to use an absolute tolerance. Also, sometimes we just know how small we want things, and using an absolute tolerance is the right thing to do in this context.
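
The usual fix is a combined test; with atol = 0 it reduces to a purely relative test. This is a sketch of the idea, not Optizelle's API:

```python
def converged(norm_grad, norm_grad0, rtol=1e-6, atol=0.0):
    # Stop when the gradient norm passes either the relative or the
    # absolute tolerance; atol = 0 recovers a purely relative test.
    return norm_grad <= max(rtol * norm_grad0, atol)

# Warm start: the gradient is already tiny, so the relative test stalls,
# but an absolute tolerance lets us stop.
stalled = converged(1e-9, 1e-8, rtol=1e-6)             # relative only
stopped = converged(1e-9, 1e-8, rtol=1e-6, atol=1e-8)  # with atol
```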

Document the scaled identity Hessian approximation better

I need to add better documentation about the ScaledIdentity option for H_type. Basically, it's what we use in order to implement a trust-region steepest descent method. Really, it's just the identity, but we scale it so that the steepest descent step is twice as large as the trust-region radius. This forces a step in the steepest descent direction with a size equal to the trust-region radius. It's not that complicated, but it's not in the manual.
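
For the record, the scaling that produces this behavior can be sketched as follows; this illustrates the idea and is not the code in Optizelle:

```python
import math

def scaled_identity_step(grad, radius):
    # H = alpha * I with alpha = ||grad|| / (2 * radius), so the
    # unconstrained step -H^{-1} grad has length exactly 2 * radius
    # and the trust region is always the binding constraint.
    norm_g = math.sqrt(sum(g * g for g in grad))
    alpha = norm_g / (2.0 * radius)
    return [-g / alpha for g in grad]

step = scaled_identity_step([3.0, 4.0], radius=1.0)
```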

Unify the restart format for the SQL vector spaces

At the moment, I don't actually have the restart code for the MATLAB/Octave SQL vector space. Part of the issue is that I store the internal information about the cones differently than what I use in C++. This is due to how we compute math operations in C++ versus how this occurs in MATLAB/Octave. Nevertheless, we should unify the two schemes so that we can move between MATLAB/Octave and C++ codes.

CCmake is picking up Python3

CCMake by default picks Python 3 and fails. It would be good if it chose the right one (perhaps from which python2.7, with the added bonus that it would pick up a virtualenv; no idea how to do this in CMake, though).

My OS is Fedora 22, GCC 5.1.1, (c)cmake 3.3.2. I have both Python 2.7 and 3.4 installed.

Add the ScalarValuedFunctionModifications to the function bundle for non-C++ languages

The ScalarValuedFunctionModifications are what we use to build things like the merit function or the gradient of the Lagrangian. At the moment, they sit inside the function bundle and they are initialized when we call getMin. In any case, these functions are freely accessible in C++, but not in MATLAB and Python. This makes calculating things like the gradient of the Lagrangian difficult for things like user-defined stopping conditions. Certainly, we could just add items like grad_stop to the optimization state, but we're probably better off just giving access to the raw modifications in case we need a different one.

Build failure: redefinition of structs

I am getting the following errors when compiling. I am using release mode, and I have manually included the --std=c++11 flag. The OS is Fedora 22, GCC 5.1.1, (c)cmake 3.3.2:

Scanning dependencies of target equality_constrained
[ 41%] Building CXX object src/unit/restart/CMakeFiles/equality_constrained.dir/equality_constrained.cpp.o
In file included from /home/david/gits/Optizelle/src/unit/restart/constrained.cpp:7:0:
/home/david/gits/Optizelle/src/unit/restart/restart.h:21:16: error: redefinition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,XX> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/constrained.cpp:4:0:
/home/david/gits/Optizelle/src/cpp/optizelle/vspaces.h:184:16: error: previous definition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,Rm> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/constrained.cpp:7:0:
/home/david/gits/Optizelle/src/unit/restart/restart.h:38:16: error: redefinition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,YY> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/constrained.cpp:4:0:
/home/david/gits/Optizelle/src/cpp/optizelle/vspaces.h:184:16: error: previous definition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,Rm> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/constrained.cpp:7:0:
/home/david/gits/Optizelle/src/unit/restart/restart.h:55:16: error: redefinition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,ZZ> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/constrained.cpp:4:0:
/home/david/gits/Optizelle/src/cpp/optizelle/vspaces.h:184:16: error: previous definition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,Rm> {
                ^
src/unit/restart/CMakeFiles/constrained.dir/build.make:62: recipe for target 'src/unit/restart/CMakeFiles/constrained.dir/constrained.cpp.o' failed
make[2]: *** [src/unit/restart/CMakeFiles/constrained.dir/constrained.cpp.o] Error 1
CMakeFiles/Makefile2:392: recipe for target 'src/unit/restart/CMakeFiles/constrained.dir/all' failed
make[1]: *** [src/unit/restart/CMakeFiles/constrained.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
In file included from /home/david/gits/Optizelle/src/unit/restart/equality_constrained.cpp:7:0:
/home/david/gits/Optizelle/src/unit/restart/restart.h:21:16: error: redefinition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,XX> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/equality_constrained.cpp:4:0:
/home/david/gits/Optizelle/src/cpp/optizelle/vspaces.h:184:16: error: previous definition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,Rm> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/equality_constrained.cpp:7:0:
/home/david/gits/Optizelle/src/unit/restart/restart.h:38:16: error: redefinition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,YY> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/equality_constrained.cpp:4:0:
/home/david/gits/Optizelle/src/cpp/optizelle/vspaces.h:184:16: error: previous definition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,Rm> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/equality_constrained.cpp:7:0:
/home/david/gits/Optizelle/src/unit/restart/restart.h:55:16: error: redefinition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,ZZ> {
                ^
In file included from /home/david/gits/Optizelle/src/unit/restart/equality_constrained.cpp:4:0:
/home/david/gits/Optizelle/src/cpp/optizelle/vspaces.h:184:16: error: previous definition of ‘struct Optizelle::json::Serialization<Real, Optizelle::Rm>’
         struct Serialization <Real,Rm> {
                ^
[ 42%] Linking CXX executable simple_equality
[ 42%] Built target simple_equality
[ 44%] Linking CXX executable simple_constrained
src/unit/restart/CMakeFiles/equality_constrained.dir/build.make:62: recipe for target 'src/unit/restart/CMakeFiles/equality_constrained.dir/equality_constrained.cpp.o' failed
make[2]: *** [src/unit/restart/CMakeFiles/equality_constrained.dir/equality_constrained.cpp.o] Error 1
CMakeFiles/Makefile2:429: recipe for target 'src/unit/restart/CMakeFiles/equality_constrained.dir/all' failed
make[1]: *** [src/unit/restart/CMakeFiles/equality_constrained.dir/all] Error 2
[ 44%] Built target simple_constrained
[ 45%] Linking CXX executable sdpa_sparse_format
[ 45%] Built target sdpa_sparse_format
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2

Automate the rescaling of functions entering the merit function

On something like an inequality constrained problem, it's possible to wash out either the constraints or the objective due to poor scaling. Recall, the merit function has the form f(x) - mu barr(h(x)). If, say, h(x) is really, really big and f(x) is small, then the algorithm gets confused and only works on the inequality constraints. In theory, in infinite dimensions, this'll be balanced over time. However, with limited precision, it becomes a problem. One simple fix is to just balance f(x) versus barr(h(x)) at the start. Basically, work with alpha f(x) - mu barr(h(x)) instead, where alpha is some positive constant. As far as equality constraints go, this may not be an issue since the penalty term may balance things enough. I need to think more about that. Nonetheless, there definitely is an issue with the inequality constraints.
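
A sketch of the balancing idea, choosing alpha so both merit terms have the same magnitude at the initial point; the guard against a zero objective is my own addition, not anything in Optizelle:

```python
def balance_merit(f0, barr0, mu, eps=1e-16):
    # Choose alpha so alpha*|f(x0)| matches mu*|barr(h(x0))|; eps guards
    # against a zero objective at the initial point.
    return abs(mu * barr0) / max(abs(f0), eps)

# Tiny objective, huge barrier: alpha boosts f so neither term washes out
alpha = balance_merit(f0=1e-3, barr0=1e6, mu=1.0)
```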

Add vector space tests

I need to add some tests that verify the correctness of a user-defined vector space. As it turns out, this is a major source of errors for myself and others that I've interacted with. Basically, for simple functions, we tend to get them wrong much of the time, especially the inequality constrained functions. In order to help diagnose these errors, we need a series of tests that verify we do things like add numbers correctly. Certainly, these can't cover all cases, but I end up doing something similar when I work on a new problem and it's about time I formalized it.
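
A few such checks can be sketched generically. The operation names mirror Optizelle's vector space interface (copy, scal, axpy, innr), but functional signatures are used here for brevity, whereas the real operations work in place:

```python
def check_vector_space(copy, scal, axpy, innr, x, y, tol=1e-12):
    # Symmetry of the inner product: <x, y> == <y, x>
    assert abs(innr(x, y) - innr(y, x)) <= tol
    # axpy identity: 0*y + 1*x == x
    z = axpy(1.0, x, scal(0.0, y))
    assert abs(innr(z, z) - innr(x, x)) <= tol
    # Scaling: ||2x||^2 == 4 ||x||^2
    w = scal(2.0, copy(x))
    assert abs(innr(w, w) - 4.0 * innr(x, x)) <= tol
    # Positivity of the norm
    assert innr(x, x) >= 0.0
    return True

# Rm implemented with plain Python lists
ok = check_vector_space(
    copy=lambda x: list(x),
    scal=lambda a, x: [a * xi for xi in x],
    axpy=lambda a, x, y: [a * xi + yi for xi, yi in zip(x, y)],
    innr=lambda x, y: sum(xi * yi for xi, yi in zip(x, y)),
    x=[1.0, 2.0, 3.0], y=[4.0, 5.0, 6.0])
```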

Get rid of the Brent's line search option

Well, a long time ago I put in the option for a Brent's line search since I told myself I was going to implement one. Some day, I will. Until then, the Brent's line search option needs to be removed.

Investigate whether or not the initial Hessian approximation for quasi-Newton methods needs to be scaled

On our quasi-Newton methods like SR1 and BFGS, we start with an initial Hessian approximation of the identity. Most of the time, this works ok, but on badly scaled problems, I'm unhappy with the performance. Basically, if the gradient is really, really small, the truncated Krylov method will take a full steepest descent step, which is also really, really small. That means we make no progress. Now, it may be that the next step performs better because now we have Hessian information, but I need to check this and then determine whether or not we'd be better off just scaling the initial identity from the start.
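
One standard candidate is the scaling commonly used for the initial BFGS matrix, H0 = gamma * I with gamma = (s'y)/(y'y), computed from the first step s and gradient difference y; whether it helps here is exactly what this issue should determine:

```python
def initial_quasinewton_scaling(s, y):
    # gamma = <s, y> / <y, y>; use H0 = gamma * I instead of the identity
    sy = sum(si * yi for si, yi in zip(s, y))
    yy = sum(yi * yi for yi in y)
    return sy / yy

# A tiny step s and gradient difference y from a badly scaled problem
gamma = initial_quasinewton_scaling(s=[1e-3, 0.0], y=[2e-3, 0.0])
```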

Documentation not available

Dear Authors,

Seems like the links to the documentation pdfs are broken. Could you possibly re-upload them?

Thanks a lot!
-Petar

Checklist before pushing to master and releasing v1.2

In theory, we're getting close to doing a new major release. I've been closing out as many existing issues as possible, but there's still a few more things that need to occur:

  1. Clarify what license documentation is required when dynamic linking to MATLAB. Basically, we compile mex files, but do so by doing the linking ourselves. I've contacted Mathworks and they've been extremely slow in resolving this issue.
    Fixed on commit 99b0ac5. Under the deployment addendum, we fall under user created files and are free to distribute MEX files.
  2. Clarify under what license terms Optizelle can distribute dlls from MinGW such as libwinpthread-1.dll, libgcc_s_seh-1.dll, libstdc++-6.dll, libgfortran-3.dll, libquadmath-0.dll. We can dynamic link them without any problem, but I'd like to distribute them directly with the installers since that means the user won't have to install MinGW. Mostly, this is for MATLAB only users who likely don't have other compilers on their system.
    Fixed on commit 99b0ac5. I believe we're covered under the GCC runtime library exception since we don't modify any parts of GCC.
  3. Fix the CMake scripts to include all license information from each dependency. Depending on how we compile things, we have different license obligations.
    Fixed on commit 99b0ac5. I did this a little differently. Basically, there's now a licenses directory that has everything. Upon configuration, these licenses are combined into a single LICENSE.txt that gets installed. It's also what gets displayed by the installers.
  4. Write the installer scripts for OS X
    Fixed on commit d0bfcdc. Though, candidly, there were some issues to work out and it's best to look toward commit df4a8e8
  5. Upload binaries for Linux (32,64), Windows (32,64), OS X (64)
    Uploaded. Though, MATLAB has gone to supporting only 64-bit, so we're only supporting 64-bit as well. Candidly, most dev machines are on 64-bit, so this shouldn't be an issue. The code should be 32-bit safe, so if someone really needs it, they should be able to compile it themselves.
  6. Update the README to point to the installers as well
    Fixed on commit df4a8e8

Document how to run CMake without ccmake

In many situations, we want to automate the build and installation of Optizelle. In that case, running ccmake or cmake-gui isn't practical. Rather, we need to use cmake and correctly configure everything in one shot. In truth, this isn't that hard to do, but we need some documentation on how to accomplish it. Mostly, due to the moderately complicated nature of the build setup, certain flags need to be specified before others.
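
As a starting point for that documentation, a one-shot configure-and-build might look like the following. The CMake variables shown are standard; any Optizelle-specific flags should be taken from the manual rather than this sketch:

```shell
# From the base Optizelle directory: configure, build, and install
# without ever opening ccmake.  CMAKE_BUILD_TYPE and CMAKE_INSTALL_PREFIX
# are standard CMake variables; project-specific flags go on the same line.
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_INSTALL_PREFIX="$HOME/optizelle" \
      ..
make install
```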

Fix extension for Python C-API libraries

On POSIX systems, C-API Python libraries have the form Library.so. On Windows, this must be Library.pyd not Library.dll. Right now, we use Library.dll, which is incorrect.

Fix the output for KryIter on failed line-searches

Right now, we report KryIter in our slightly more verbose output. This is the number of Krylov iterations we used in the linear system solve related to something like Newton-CG. Now, when we fail a line search and need to keep searching, we don't actually do more Krylov iterations. We just truncate the step and keep searching. However, KryIter keeps showing numbers as if we did work. Technically, the variable krylov_iter_total is correct, but the output is confusing and needs to be fixed.

Increase trust-region when the quasinormal step hits the boundary

We really should increase the trust region when the quasinormal step hits the boundary, but the tangential step doesn't, and we have a good ratio on our actual versus predicted reduction. Basically, we need to add code that tracks what happened with the dogleg method and then add an extra case to checkStep.

MATLAB examples failing due to error with error

Any of the MATLAB examples I try to run fail with similar errors to this:

simple_inequality('test_optizelle.out')
Error using error
Too many output arguments.

Error in setupOptizelle>@(x)error(x) (line 167)
    'error',@(x)error(x));

Error in InequalityConstrainedStateReadJson (line 11)
    self=InequalityConstrainedStateReadJson_(X,Z,msg,fname,state);

Error in simple_inequality>main (line 82)
    state=Optizelle.json.InequalityConstrained.read( ...

Error in simple_inequality (line 10)
    main(fname);

Maybe I'm doing something wrong? Or maybe it is an issue with MATLAB 2014a?

Add unit test for when we exit due to the nominal tolerances being too tight

When we reject both the Newton step and the Cauchy point, we tighten all of our augmented solve tolerances and try again. Sometimes, due to numerical error, these tolerances become essentially zero, which means we can't make progress and need to exit. We currently have a safety check, but we need a unit test to verify it.

Add unit test for the orthogonality check in truncated CD

If we do an inexact nullspace projection, it's possible that we lose orthogonality in the Krylov subspace vectors in the truncated conjugate direction algorithm. To compensate, we add an extra stopping criterion for when we lose orthogonality. We need a unit test to verify this functionality.

Set CI up

It is always nice to have a CI system. As the build requires some tuning (finding libraries) it may be a bit tricky, but I think it is definitely worth the effort.

I will see what I can manage in the following days.

CMake build scripts don't add dependency on supporting files when running unit tests

Right now, there's no dependency on the supporting files for the unit tests. That means that when supporting files for a unit test are modified, the unit tests don't get the updates. Two places where this occurs are in the perceptron and computation_caching examples. The short fix is to delete these directories in the build folder, but it'd be nice to do this correctly.

Test failures: newton_cg and newton_cg_backtracking

I have successfully built and installed Optizelle, but make test reports two errors:

 171 - Solution_to_cpp_/home/david/Downloads/Optizelle-master/src/examples/simple_quadratic_cone/newton_cg.json (Failed)
 173 - Solution_to_cpp_/home/david/Downloads/Optizelle-master/src/examples/simple_quadratic_cone/newton_cg_backtracking.json (Failed)

I have activated Python, C++, and OpenMP bindings. I have only tested Release mode, with flags -O2 and -O3, and with both -march=native and the default.

I am on a Fedora 20 box, with gcc 4.8.3, jsoncpp 0.6.0 release 0.11.rc2 from the repositories, and cmake 2.8.12.2.

Add unit test for MINRES where it uses only a rank-1 projection and we exit immediately

Sometimes, such as when we have a rank-1 nullspace projection, the second Krylov vector lies in the nullspace of the preconditioner (projection). If we need to exit early, due to the trust-region radius being hit or due to a detected NaN, we don't have a search direction to fall back on in order to calculate our truncated solution. In this case, we can calculate our truncated solution based on our first Krylov vector, but we need special code to do so. We need a unit test to test this functionality.

jsoncpp cannot be found

I am trying to install Optizelle with C++ support on a Fedora 20 box. I have installed jsoncpp and jsoncpp-devel, but CMake is not able to find it:

 CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
   Could NOT find JsonCpp (missing: JSONCPP_LIBRARY)
 Call Stack (most recent call first):
   /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
   src/cmake/Modules/FindJsonCpp.cmake:20 (find_package_handle_standard_args)
   src/thirdparty/CMakeLists.txt:54 (find_package)

I have a directory called /usr/include/jsoncpp/json containing the files:

autolink.h  config.h  features.h  forwards.h  json.h  reader.h  value.h  writer.h

I have tried to set the option JSONCPP_INCLUDE_DIR to that path explicitly, but it doesn't work either.

Rewrite IO procedures and overhaul the Messaging object

There are actually two separate issues that relate to the same piece of code.

First, the error-reporting pieces of the code need to be fixed. Right now, Optizelle uses a Messaging object to accomplish two things. First, it's what writes general debugging output to the screen. Second, it's what we use to report errors. The second part has been having some major issues. In C++, this wasn't such a big deal since we could just write the error to stderr and then quit. However, for MATLAB/Octave and Python, this has been a huge burden. The issue is that in these languages we tend to develop incrementally and let runtime errors help us fix our functions. Right now, Optizelle is gobbling these errors and then trying to force them through the Messaging object, which is the wrong thing to do. Instead, we need to fix the code so that the wrapper language has the ability to pass back its natural error-reporting mechanisms. In Python, these are exceptions, and in MATLAB/Octave this is output from error.

The second issue relates to writing restart files in parallel. At the moment, the write_restart function will automatically write a json-formatted restart file to disk. This is a problem because in an MPI program it turns out that multiple machines may try to write the same file, and we only want the head node to do it. Now, I've added enough hooks at this point that we can safely write things like vectors in parallel to disk. However, this last piece of writing the json file could cause problems. As such, we require a generic user-defined interface to specify how things are written to file. In the MPI case, we just check if we're the head node. Otherwise, we just write.

This last issue is related to the Messaging function because it suggests that we need a generic IO operator that handles these things properly. As such, I propose writing a generic IO operator that just reads and writes. For something like stdio, this function would just write to stdout and read from stdin. However, for things like MPI, we could put in the appropriate checks. It also means that we could receive input and send output over a socket in a much more straightforward way. Finally, this means that we would eliminate the error function from the Messaging object and instead fix the error reporting to whatever is most appropriate for the client language. Again, for C++ this would be exceptions, in Python this would also be exceptions, and in MATLAB/Octave this would be the error function.
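
A minimal sketch of the proposed IO split; this is not Optizelle's interface. The rank argument is a hypothetical stand-in for an MPI communicator query, and write returns what it emitted purely so the behavior is easy to check:

```python
class StdIO:
    """Generic IO operator that just reads and writes."""
    def write(self, text):
        # A real implementation would write to stdout, a file, or a socket;
        # returning the text here makes the behavior easy to verify.
        return text

class HeadNodeIO(StdIO):
    """MPI-aware variant: only the head node (rank 0) actually writes."""
    def __init__(self, rank):
        self.rank = rank
    def write(self, text):
        if self.rank == 0:
            return super().write(text)
        return None  # non-head nodes suppress the write

head = HeadNodeIO(rank=0).write("restart file contents")
worker = HeadNodeIO(rank=3).write("restart file contents")
```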

Document that steepest-descent really is preconditioned steepest-descent

I need to add documentation to note that steepest descent is really a preconditioned steepest-descent step. Meaning, the step that we actually take is dx = -(PH)(grad f(x)), where PH is the preconditioner. This is actually a good way to hijack the algorithms to implement a custom search direction. Basically, if we want to implement an exact or approximate inverse of the Hessian, the best way to use it is to add it as a preconditioner and then call steepest descent. This avoids the truncated-Krylov machinery if it's undesired.
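
A sketch of the hijack described above, not Optizelle code: with P chosen as the exact inverse Hessian of a quadratic, the "steepest descent" step becomes the Newton step and converges in one iteration:

```python
def preconditioned_steepest_descent(grad, prec, x0, iters=100):
    # The step taken is dx = -P(grad f(x)), so a custom P acts as a
    # custom search direction.
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        dx = [-d for d in prec(g)]
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# f(x) = 0.5 * (4 x0^2 + x1^2); P is the exact Hessian inverse diag(1/4, 1)
grad = lambda x: [4.0 * x[0], x[1]]
prec = lambda g: [g[0] / 4.0, g[1]]
x = preconditioned_steepest_descent(grad, prec, [3.0, -7.0], iters=1)
```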

Add a unit test with three distinct vector spaces to C++

Add a unit test with three distinct vector spaces to our C++ unit tests. Due to how templating works, it's possible that we used the wrong vector space in our code (X as opposed to Y or Z) and it's not detected by the compiler because we use the same vector space in each case, such as Rm. To eliminate these errors, we need three different vector spaces with unique classes for X, Y, and Z in a unit test. Then, the compiler should correctly find these errors.
