esa / pagmo2

A C++ platform to perform parallel computations of optimisation tasks (global and local) via the asynchronous generalized island model.

Home Page: https://esa.github.io/pagmo2/

License: GNU General Public License v3.0

CMake 0.10% C++ 99.89% Shell 0.01% PowerShell 0.01%
optimization optimization-algorithms optimization-methods optimization-tools parallel-computing parallel-processing evolutionary-algorithms multi-objective-optimization stochastic-optimizers genetic-algorithm

pagmo2's Introduction

pagmo

Join the chat at https://gitter.im/pagmo2/Lobby


IMPORTANT NOTICE: pygmo, the Python bindings for pagmo, have been split off into a separate project, hosted here. Please update your bookmarks!

pagmo is a C++ scientific library for massively parallel optimization. It is built around the idea of providing a unified interface to optimization algorithms and problems, and of making their deployment in massively parallel environments easy.

If you are using pagmo as part of your research, teaching, or other activities, we would be grateful if you could star the repository and/or cite our work. For citation purposes, you can use the following BibTeX entry, which refers to the pagmo paper in the Journal of Open Source Software:

@article{Biscani2020,
  doi = {10.21105/joss.02338},
  url = {https://doi.org/10.21105/joss.02338},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {53},
  pages = {2338},
  author = {Francesco Biscani and Dario Izzo},
  title = {A parallel global multiobjective framework for optimization: pagmo},
  journal = {Journal of Open Source Software}
}

The DOI of the latest version of the software is available at this link.

The full documentation can be found here.

Upgrading from pagmo 1.x.x

If you were using the old pagmo, have a look here for some technical background on what was developed, and why a completely new API and codebase were needed: https://github.com/esa/pagmo2/wiki/From-1.x-to-2.x

You will find many tutorials in the documentation; we suggest skimming through them to appreciate the differences. The new pagmo (version 2) should be considered (and is) an entirely different code.

pagmo2's People

Contributors

acxz, ahmedr2001, ax3l, bennybp, bidski, bluescarni, ckaldemeyer, coolrunning, darioizzo, fuzihaofzh, hulucc, jakirkham, johanmabille, jschueller, jslee02, jsoref, kirbyherm, kishmanani, lyskov, michiboo, mkkim1129, mlooz, mlopez-ibanez, ow97, sceki, schwarzschildx, sylvaincorlay, tmiasko, wjakob, yawgmoth90

pagmo2's Issues

API improvements

While writing the getting-started tutorials, a few things emerged that may be easy to implement:

  • Provide some sort of iterator in the archipelago class, so as to allow range-based for loops
  • Change the meaning of the seed in the archipelago constructor so that it becomes a generator for the seeds of the various populations in the islands. This would avoid identical populations being constructed in the different islands (#74)
  • Provide a pagmo.hpp header, and maybe the algorithms.hpp and problems.hpp headers.
  • Make some sort of minimal unified log for NLOPT algorithms.
  • Fix the behaviour of population.push_back so that its checks are consistent with population.set_xf, and avoid crashes
  • The kwarg in the archipelago constructor could be pop_size and not size (#74)
  • The line res = [isl.get_population().champion_f for isl in archi] is quite a common pattern; maybe a dedicated method (get_champions?) should be added to the archipelago (#75) (see the sketch after this list)
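
A minimal sketch of the champion-gathering pattern mentioned in the last bullet. The helper get_champions below is hypothetical (it is not an existing pygmo method), and the algorithm/problem choices are just placeholders:

import pygmo as pg

def get_champions(archi):
    # collect the champion fitness of every island in the archipelago
    return [isl.get_population().champion_f for isl in archi]

if __name__ == "__main__":
    algo = pg.algorithm(pg.de(gen=100))
    prob = pg.problem(pg.rosenbrock(10))
    archi = pg.archipelago(n=4, algo=algo, prob=prob, pop_size=20)
    archi.evolve()
    archi.wait()
    print(get_champions(archi))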

estimate_sparsity

The function estimate_sparsity currently computes the fitness by changing each component of the decision vector x by the same amount (default 1e-8). This causes issues for problems that are not properly scaled.

A possible fix would be to pass instead the lower and upper bounds of x as arguments, in addition to a number N: the function could compute a random x within the bounds, then for every component of x change it N times (one by one) within the respective bounds, and check if the fitness components are constant in all of the N points obtained.
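
A rough sketch of the proposed bounds-aware estimation, for illustration only (the function name and signature below are hypothetical, not the pagmo API):

import numpy as np

def estimate_sparsity_bounded(fitness, lb, ub, N=8, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    x = rng.uniform(lb, ub)                    # random point within the bounds
    f0 = np.asarray(fitness(x))
    pattern = []
    for j in range(len(x)):
        changed = np.zeros(len(f0), dtype=bool)
        for _ in range(N):
            xt = x.copy()
            xt[j] = rng.uniform(lb[j], ub[j])  # perturb component j within its own bounds
            changed |= np.asarray(fitness(xt)) != f0
        pattern += [(i, j) for i in range(len(f0)) if changed[i]]
    return sorted(pattern)

# Example: the sparsity of f(x) = [x0*x1, x2] is detected as [(0, 0), (0, 1), (1, 2)].
print(estimate_sparsity_bounded(lambda x: [x[0] * x[1], x[2]], [-1, -1, -1], [1, 1, 1]))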

Iteration

Just gonna brain dump some iterative improvements:

  • implement the extract functionality for the island (currently missing)
  • in pygmo, document special members such as __len__, __getitem__, __copy__, __deepcopy__ that are implemented in some classes (e.g., island, archipelago, population, etc.). This can be done by adding the sphinx directive :special-members: to the autoclass directive, but then this brings in the __init__ documentation which is repeated in the class description. Need to find a way to make it all look consistent and nice.

CI improvements

A list of possible CI improvements:

  • pygmo building and testing
  • expand CI on windows (32bit/MinGW builds?)
  • automatically upload documentation to gh-pages
  • manylinux builds and automated upload to PyPI (#82)
  • run make doctest in some CI? (maybe in the compile script)
  • reduce verbosity of travis logs

UDP and archipelago

I have noticed that when I try the archipelago class with a Python user-defined problem (UDP), pygmo somehow needs to connect to an IPython cluster. With a predefined problem, instead, everything runs smoothly on my box.
Is there any way to avoid having to configure an IPython cluster?

This is the code I am using for this test:

import pygmo as pg

class sphere_function:
    def __init__(self, dim):
        self.dim = dim

    def fitness(self, x):
        return [sum(x * x)]

    def get_bounds(self):
        return ([-1] * self.dim, [1] * self.dim)

    def get_name(self):
        return "Sphere Function"

    def get_extra_info(self):
        return "\tDimensions: " + str(self.dim)

prob = pg.problem(sphere_function(3))
algo = pg.algorithm(pg.bee_colony(gen = 20, limit = 20))
archi = pg.archipelago(n=1, algo=algo, prob=prob, pop_size=10)

Installation Error, boost libraries too old. How to point `ccmake` to the new libraries?

Problem:

I'm trying to install PyGMO on Red Hat Linux 5, following the instructions on the official site.

When running the ccmake ../ command I get an error telling me that my Boost libraries are too old:

 CMake Error at /usr/share/cmake/Modules/FindBoost.cmake:1111 (message):
   Unable to find the requested Boost libraries.

   Boost version: 1.41.0

   Boost include path: /usr/include

   Detected version of Boost is too old.  Requested version was 1.48 (or
   newer).
 Call Stack (most recent call first):
   CMakeLists.txt:99 (FIND_PACKAGE)



 CMake Error at CMakeLists.txt:116 (MESSAGE):
   I cannot find the keplerian_toolbox library or headers, please install it
   and make sure it can be found

I then installed Boost 1.48 using sudo yum install boost148-devel without problems. Now, my /usr/lib64 directory contains copies of both the 1.41 and 1.48 versions of the various lib files:

    ▶ ls -l | grep boost | head -n 20
    drwxr-xr-x  2 root root     4096 Feb 21 20:11 boost
    drwxr-xr-x  2 root root     4096 Feb 21 21:40 boost148
    -rwxr-xr-x  1 root root    32536 Apr 17  2015 libboost_chrono-mt.so.1.48.0
    -rwxr-xr-x  1 root root    32504 Apr 17  2015 libboost_chrono.so.1.48.0
    lrwxrwxrwx  1 root root       31 Feb 21 20:11 libboost_date_time-mt.so -> libboost_date_time-mt.so.1.41.0
    -rwxr-xr-x  1 root root    75664 Feb  8  2012 libboost_date_time-mt.so.1.41.0
    -rwxr-xr-x  1 root root    67168 Apr 17  2015 libboost_date_time-mt.so.1.48.0
    lrwxrwxrwx  1 root root       28 Feb 21 20:11 libboost_date_time.so -> libboost_date_time.so.1.41.0
    -rwxr-xr-x  1 root root    75560 Feb  8  2012 libboost_date_time.so.1.41.0
    -rwxr-xr-x  1 root root    67064 Apr 17  2015 libboost_date_time.so.1.48.0
    lrwxrwxrwx  1 root root       32 Feb 21 20:11 libboost_filesystem-mt.so -> libboost_filesystem-mt.so.1.41.0
    -rwxr-xr-x  1 root root    87624 Feb  8  2012 libboost_filesystem-mt.so.1.41.0
    -rwxr-xr-x  1 root root   124488 Apr 17  2015 libboost_filesystem-mt.so.1.48.0
    -rwxr-xr-x  1 root root    87616 Feb 18  2012 libboost_filesystem-mt.so.5
    lrwxrwxrwx  1 root root       29 Feb 21 20:11 libboost_filesystem.so -> libboost_filesystem.so.1.41.0
    -rwxr-xr-x  1 root root    87584 Feb  8  2012 libboost_filesystem.so.1.41.0
    -rwxr-xr-x  1 root root   124448 Apr 17  2015 libboost_filesystem.so.1.48.0
    -rwxr-xr-x  1 root root    87584 Feb 18  2012 libboost_filesystem.so.5
    lrwxrwxrwx  1 root root       27 Feb 21 20:11 libboost_graph-mt.so -> libboost_graph-mt.so.1.41.0
    -rwxr-xr-x  1 root root   171632 Feb  8  2012 libboost_graph-mt.so.1.41.0

But even now I still get the same error when running the ccmake command.

Question:

How can I tell PyGMO to look for the boost 1.48 libraries?

pygmo archipelago using simulated_annealing only improves the initial champion?

When using the simulated_annealing algorithm in an archipelago, I found that only the initial champion seems to be evolved, while all the other individuals stay the same.

Consider the following test case:

archi=pg.archipelago(n=1,algo=pg.de(),pop_size=10,prob=pg.rosenbrock(10),seed=32)
archi[0].get_population().get_f()
array([[ 1119502.9304024 ], [ 1887060.79044021], [ 881859.7076029 ], [ 1103797.48649139], [ 758181.03305512], [ 1747402.51182214], [ 1030352.91487772], [ 416032.91483984], [ 1836179.29729704], [ 609598.37469839]])

archi.evolve(); archi.wait()
archi[0].get_population().get_f()
array([[ 216007.47204985], [ 1270458.58395363], [ 881859.7076029 ], [ 276188.16879124], [ 555552.67057929], [ 1265020.82538769], [ 384033.30916346], [ 360803.18304403], [ 1798444.20627363], [ 230837.65916308]])

But if I run the same using algo=pg.simulated_annealing(), then the evolved population_f after one call to archi.evolve() becomes,

array([[ 1.11950293e+06], [ 1.88706079e+06], [ 8.81859708e+05], [ 1.10379749e+06], [ 7.58181033e+05], [ 1.74740251e+06], [ 1.03035291e+06], [ 6.23189001e+00], [ 1.83617930e+06], [ 6.09598375e+05]])

Notice that only the individual with the smallest starting fitness has been evolved, whereas most of the fitness values change when using pg.de(). This happens with different problems, different algorithms, and different numbers of islands. Seems like a bug?

So far the workaround I've found is to call
archi = pg.archipelago(n = 10, algo = pg.simulated_annealing(), prob = pg.rosenbrock(10), pop_size = 1, seed = 32) instead. However, if I do this using pg.de() as the algo, I receive "error occurred" for each thread. Very odd, all of it...

Archipelagos and the mp_island namespace

Hi,
I'm attempting to parallelize some maximization problems I'm working on with pygmo, and I'm running into a bit of an issue. I'm on Win64, Python 3.6.1, Pygmo 2.2. The simplest example I can get to fail that replicates my problem is below.

import pygmo as pg
import numpy as np


class toy_problem:

    def fitness(self, x):
        return [np.sum(np.sin((x - .2) ** 2))]

    def get_bounds(self):
        return (np.array([-1] * 3), np.array([1] * 3))


if __name__=='__main__':
    algo = pg.algorithm(pg.de(gen=1000, seed=126))
    prob = pg.problem(toy_problem())
    pop = pg.population(prob=prob, size=10)
    print(pop.champion_f)
    pop = algo.evolve(pop)
    print(pop.champion_f)
    # fine up to this point

    archi = pg.archipelago(n=6, algo=algo, prob=prob, pop_size=70)
    archi.evolve()
    archi.wait_check()

This fails at wait_check() with the message

Traceback (most recent call last):
  File "C:\Python36\lib\site-packages\pygmo\_py_islands.py", line 128, in run_evolve
    return res.get()
  File "C:\Python36\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
NameError: name 'np' is not defined

Apparently, when mp_island creates a pool object, the imports from __main__ never get imported by the children; I'm not entirely sure why. I can fix this either by (1) not using numpy, or (2) changing fitness to

def fitness(self, x):
    import numpy as np
    return [np.sum(np.sin((x - .2) ** 2))]

but obviously neither is a long-term solution, and this problem also prevents me from calling other functions defined in __main__, as well as from using any other imported modules. Any help is appreciated, thanks.
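
One workaround that is often suggested for this kind of multiprocessing issue (a sketch, not verified against this exact setup; the module name toy_problem_module.py is hypothetical): define the UDP, together with its imports, in its own importable module, so that the worker processes re-import that module and np is defined on their side as well.

# --- toy_problem_module.py (hypothetical file name) ---
import numpy as np

class toy_problem:
    def fitness(self, x):
        return [np.sum(np.sin((x - .2) ** 2))]

    def get_bounds(self):
        return (np.array([-1] * 3), np.array([1] * 3))

# --- main script ---
import pygmo as pg
from toy_problem_module import toy_problem

if __name__ == '__main__':
    algo = pg.algorithm(pg.de(gen=1000, seed=126))
    prob = pg.problem(toy_problem())
    archi = pg.archipelago(n=6, algo=algo, prob=prob, pop_size=70)
    archi.evolve()
    archi.wait_check()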

'archipelago' doesn't parallelize to different cores

Hi,

I've tried to make use of archipelago in order to parallelize the problem solution. I first tried the toy example mentioned in:
https://esa.github.io/pagmo2/docs/python/tutorials/using_archipelago.html
Indeed I can see that all the cores are utilized.
However, when I use my own problem (it reads data from a file and learns a discretization for it), I no longer see any parallelism:

import pygmo as pg
import numpy as np
import matplotlib.pyplot as plt
from numba import jit, float64
import pickle
class auto_quantizer:
    def __init__(self, dim, min_value, max_value, actual_data):
        self.dim = dim
        self.min_value = min_value
        self.max_value = max_value
        self.actual_data = actual_data

    def fitness(self, x):
        return [self.determine_nearest_value(x)]

    def get_bounds(self):
        return (np.full((self.dim,), self.min_value), np.full((self.dim,), self.max_value))

    def determine_nearest_value(self, x):
        total_loss = 0
        for data_point in self.actual_data:
            total_loss += np.min(np.abs(x - data_point))
        return total_loss
if __name__ == '__main__':
    actual_data = pickle.load(open('speed.pkl', 'rb'))[0]
    actual_data = np.array(actual_data)
    print ("actual_data: ", actual_data.shape)
    min_value = np.min(actual_data)
    max_value = np.max(actual_data)
    nb_levels = 8
    uni_levels = np.linspace(min_value, max_value, nb_levels)
    prob = pg.problem(auto_quantizer(nb_levels, min_value, max_value, actual_data))
    algo = pg.algorithm(pg.sea(gen = 2000))
    archi = pg.archipelago(32,algo=algo, prob=prob, pop_size=200)
    print(archi)
    archi.evolve()
    archi.wait()
    print (archi.get_champions_f())

I can see only one core is being utilized.
Any thoughts on why?

NLopt improvements

#67 introduces a first iteration of the NLopt wrappers in pagmo. Random dump of possible improvements:

  • implement support for the augmented Lagrangian algorithms NLOPT_AUGLAG and NLOPT_AUGLAG_EQ (#75)
  • in addition to being able to set up the algo's parameters after construction, we should probably provide a kwargs ctor in python to set up the parameters upon construction (for consistency with other algos)
  • implement missing algorithm configuration options, such as nlopt_set_xtol_abs() (absolute tolerance on parameters with a vector of tolerances rather than a single tolerance value valid for all components), nlopt_set_initial_step() (initial step size for derivative-free algorithms), nlopt_set_vector_storage() (vector storage for limited-memory quasi-Newton algorithms)
  • add support for the global optimisation algorithms (this essentially just requires to add a handful of config options which are not yet exposed, because they are meaningful only for the global opt algos)
  • add support for hessians preconditioning (still experimental in NLopt)
  • implement a cache for avoiding repeated calls to problem::fitness(). pagmo computes the objfun and constraints in a single call to problem::fitness(), but NLopt (and, presumably, other local optimisation libraries) separates the computation of objfun and constraints into different functions. This means that our local optimisation wrappers might end up calling fitness() repeatedly with the same decision vector. The idea is then to code a cache that remembers the result of the last N calls to fitness() (and maybe gradient() as well?), in order to avoid wasting CPU cycles (a sketch of such a cache follows below).

See http://ab-initio.mit.edu/wiki/index.php/NLopt_Reference
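
A minimal Python sketch of the cache idea from the last bullet (names are illustrative only; the actual pagmo implementation would live in the C++ wrappers):

from collections import OrderedDict

class cached_fitness:
    def __init__(self, fitness, max_entries=8):
        self._fitness = fitness
        self._cache = OrderedDict()
        self._max_entries = max_entries

    def __call__(self, x):
        key = tuple(x)
        if key in self._cache:
            self._cache.move_to_end(key)      # mark as most recently used
            return self._cache[key]
        f = self._fitness(x)
        self._cache[key] = f
        if len(self._cache) > self._max_entries:
            self._cache.popitem(last=False)   # evict the least recently used entry
        return f

calls = []
def expensive_fitness(x):
    calls.append(tuple(x))
    return [sum(v * v for v in x)]

cf = cached_fitness(expensive_fitness)
cf([1.0, 2.0])
cf([1.0, 2.0])
print(len(calls))  # 1: the second call was served from the cache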

Interruptible islands

In some cases it might be desirable to force the interruption of an evolution (e.g., a buggy algorithm entered an endless loop, a stopping criterion was misconfigured and now the optimisation will run until the heat death of the universe, etc.).

It seems unlikely that we can interrupt threads in a safe fashion, so interruption of thread_islands seems to be off the table. For other islands, however, it might be possible to force the termination of the underlying external process. This should be the case for mp_island, for instance, since Python's multiprocessing pool class does have a terminate() method that seems to do what we need:

https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.terminate

It remains to be seen if/how we can cope with a process being killed off in the island machinery: will the async result get() method hang? Is Python able to clean up everything properly? Need to investigate.

Archipelago API improvement

  • In the archi __repr__, show for each line whether an exception has occurred (add a column, or a tick)
  • Rename the archi method get() to wait_check()

Champion for stochastic optimization problems?

Currently, a population of a stochastic optimization problem can be queried for its champion. This will give you a chromosome (not necessarily currently in the population) and a fitness value (usually computed at a previous state of the internal PRG). It is not possible to retrieve this previous state/seed, so it is unclear how this fitness came into existence. Moreover, it is debatable whether the notion of a champion even makes sense for a stochastic optimization problem (i.e., the champion could be the worst individual according to the current state of the PRG).

DTLZ improvements

There is currently a switch statement in the DTLZ problem that dispatches to the objective function to be executed on the decision vector. A potential performance improvement would be to replace this dispatch with function pointers or templates.

    switch (m_prob_id) {
        case 1:
            retval = f1_objfun_impl(x);
            break;
        case 2:
        case 3:
            retval = f23_objfun_impl(x);
            break;
        case 4:
            retval = f4_objfun_impl(x);
            break;
        case 5:
        case 6:
            retval = f56_objfun_impl(x);
            break;
        case 7:
            retval = f7_objfun_impl(x);
            break;
    }

Ipopt improvements

  • enable ipopt in the pip packages (MinGW and manylinux builds). We will have to compile ipopt by hand for this.

Uniform #include style

At the moment we have two styles for the inclusion of pagmo headers within pagmo:

#include <pagmo/algorithm.hpp>

vs

#include "algorithm.hpp"

I would prefer if we eventually switched to the first style, which I like better because it is the same style someone using pagmo as a C++ library would use, making things more consistent overall.

Python docs improvements

Keep track of useful improvements to the Python documentation:

  • document special members such as __len__, __getitem__, __copy__, __deepcopy__ that are implemented in some classes (e.g., island, archipelago, population, etc.). This can be done by adding the sphinx directive :special-members: to the autoclass directive, but then this brings in the __init__ documentation which is repeated in the class description. Need to find a way to make it all look consistent and nice.

Improve support for integers

At the current stage, integer programming support is limited. The integer part of the decision vector can be passed as an argument to algorithms, and evolve() takes care of applying different operators to the float and integer parts.

This way of doing things has obvious drawbacks and a better model would be to define an integer part of the decision vector in the problem.

  • Introduce in the UDP interface a get_nix() that, similarly to get_nic() and get_nec(), defines the integer part of the decision vector (check that get_nix() <= get_nx() and that the bounds corresponding to the integer components are indeed integers); see the sketch after this list
  • Modify the population constructor so that random decision vectors will have integers in their integer part (maybe modify the pagmo::random_decision_vector utility)
  • Update the nsga2 and sga algorithms removing the m_int_dim member and using the get_nix() from the pagmo::problem instead
  • Update the algo list with the int information.
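
A sketch of what a mixed-integer UDP could look like with the proposed get_nix() (the interface below mirrors the proposal above; details of the final API may differ):

import pygmo as pg

class mixed_int_sphere:
    def fitness(self, x):
        # x[0] and x[1] are continuous, x[2] is the (trailing) integer component
        return [x[0] ** 2 + x[1] ** 2 + x[2] ** 2]

    def get_bounds(self):
        # the bounds of the integer component must themselves be integral
        return ([-1, -1, -3], [1, 1, 3])

    def get_nix(self):
        # number of integer components, counted from the end of the decision vector
        return 1

prob = pg.problem(mixed_int_sphere())
print(prob.get_nix(), prob.get_nx())  # expected: 1 3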

Meta-problems do not forward the integer dimension

The following example reveals how meta-problems do not forward the integer dimension to the returned UDP.

class my_prob:

    def fitness(self, x):
        return [1, 2, 3, 4, 5, 6]

    def get_nec(self):
        # Number of equality constraints
        return 1

    def get_nic(self):
        # Number of inequality constraints
        return 2

    def get_nobj(self):
        # Number of objectives
        return 3

    def get_nix(self):
        # Number of integer variables
        return 1

    def get_bounds(self):
        # Bounds of the individual
        return ([1, 2], [3, 4])

prob = pygmo.problem(pygmo.unconstrain(my_prob()))

One would expect the resulting integer dimension to be 1, but it's not :( The same applies to translate and decompose.

Tutorials/examples for multi-objective custom UDPs

Hi everybody,

I have read the complete python tutorials and have successfully tested my pagmo2/pygmo installation with some simple mono-objective examples.

Since I intend to use it for multi-objective optimization, I have been looking for some examples of how to do this. The mono-objective UDP example tutorial says that the number of objectives can be defined similarly to the number of constraints (eqs/iqs), but "Since we do not define, in this case, any other method pygmo will assume a single objective, no constraints, no gradients etc…", and in the MOO tutorial a pre-defined standard problem is chosen.

Are there any tutorials or code examples that show how to set up multi-objective customized UDPs such as in the simple UDP tutorial?

Your package seems really promising, but I got stuck here...

Thanks in advance!
Cord
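
A minimal sketch of a multi-objective UDP, assuming the standard pygmo UDP interface (the problem itself is just a toy example): the key point is that fitness() returns as many components as get_nobj() declares.

import pygmo as pg

class two_objective_udp:
    def fitness(self, x):
        f1 = x[0] ** 2 + x[1] ** 2
        f2 = (x[0] - 1) ** 2 + (x[1] - 1) ** 2
        return [f1, f2]

    def get_nobj(self):
        # without this, pygmo assumes a single objective
        return 2

    def get_bounds(self):
        return ([-4, -4], [4, 4])

prob = pg.problem(two_objective_udp())
pop = pg.population(prob, size=40)
pop = pg.algorithm(pg.nsga2(gen=100)).evolve(pop)
print(pop.get_f()[:5])  # first five fitness vectors of the evolved population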

Constrained multi-objective optimization

Hello,
first of all let me thank you for the work you put into this project and for making it available to the world.
I am trying to solve a constrained multi-objective problem, but I don't seem to find a way to implement constraints in this latest version. I read the documentation of PyGMO 1.x.x, and there is a section with tutorials on constraint handling. Do those methods apply to the 2.5 version as well? Or else, can you provide some pointers to help me do this?
Thank you in advance for your support.

Regards,
Simone.
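
A hedged sketch of how constraints are typically declared in a pygmo 2.x UDP (a toy example, not from the original thread): the fitness vector concatenates objectives, equality constraints and inequality constraints, in that order, and get_nec()/get_nic() declare how many of each there are.

import pygmo as pg

class constrained_moo_udp:
    def fitness(self, x):
        f1, f2 = x[0], x[1]               # two objectives
        ec = x[0] + x[1] - 1.0            # equality constraint: x0 + x1 = 1
        ic = x[0] ** 2 + x[1] ** 2 - 4.0  # inequality constraint: x0^2 + x1^2 <= 4
        return [f1, f2, ec, ic]

    def get_nobj(self):
        return 2

    def get_nec(self):
        return 1

    def get_nic(self):
        return 1

    def get_bounds(self):
        return ([-5, -5], [5, 5])

prob = pg.problem(constrained_moo_udp())
print(prob.get_nobj(), prob.get_nec(), prob.get_nic())  # expected: 2 1 1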

How to optimise a NN using DE?

I am trying to optimise a NN using DE with the pagmo2 package.

I defined my fitness function as:

import numpy as np

def fitness(params, *arg):
    X=arg[0][0]
    y=arg[0][1]
    #print(arg[0][1])
    """Forward propagation as objective function

    This computes for the forward propagation of the neural network, as
    well as the loss. It receives a set of parameters that must be
    rolled-back into the corresponding weights and biases.

    Inputs
    ------
    params: np.ndarray
        The dimensions should include an unrolled version of the
        weights and biases.

    Returns
    -------
    float
        The computed negative log-likelihood loss given the parameters
    """
    # Neural network architecture
    #print(a)
    n_inputs = 4
    n_hidden = 20
    n_classes = 3

    # Roll-back the weights and biases
    W1 = params[0:80].reshape((n_inputs,n_hidden))
    b1 = params[80:100].reshape((n_hidden,))
    W2 = params[100:160].reshape((n_hidden,n_classes))
    b2 = params[160:163].reshape((n_classes,))

    # Perform forward propagation
    z1 = X.dot(W1) + b1  # Pre-activation in Layer 1
    a1 = np.tanh(z1)     # Activation in Layer 1
    z2 = a1.dot(W2) + b2 # Pre-activation in Layer 2
    logits = z2          # Logits for Layer 2

    # Compute for the softmax of the logits
    exp_scores = np.exp(logits)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

    # Compute for the negative log likelihood
    #N = 150 # Number of samples
    N=(X.shape)[0]
    #print(X.shape)[0]
    corect_logprobs = -np.log(probs[range(N), y])
    loss = np.sum(corect_logprobs) / N

    return loss

I want to optimise it with DE so I am passing arguments as

from sklearn.datasets import load_iris
data = load_iris()
X = data.data
y = data.target

algo = algorithm(de(gen = 1000))
algo.set_verbosity(100)
prob=fitness(np.random.randn(163),args)
pop = population(prob, 20)
pop = algo.evolve(pop) 

But I am getting errors! Please help.
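
One possible way to plug the loss above into pygmo (a sketch, assuming the fitness function defined earlier in this issue is in scope; the ±10 bounds on the parameters are an arbitrary assumption): wrap it in a UDP class so that pygmo sees the usual fitness()/get_bounds() interface, then build the problem, algorithm and population from that.

import numpy as np
import pygmo as pg
from sklearn.datasets import load_iris

class nn_udp:
    def __init__(self, X, y, n_params=163):
        self.X, self.y, self.n_params = X, y, n_params

    def fitness(self, params):
        # `fitness` here is the loss function defined earlier in this issue
        return [fitness(params, (self.X, self.y))]

    def get_bounds(self):
        # arbitrary box bounds on the unrolled weights/biases (an assumption)
        return ([-10.0] * self.n_params, [10.0] * self.n_params)

data = load_iris()
prob = pg.problem(nn_udp(data.data, data.target))
algo = pg.algorithm(pg.de(gen=1000))
algo.set_verbosity(100)
pop = pg.population(prob, 20)
pop = algo.evolve(pop)
print(pop.champion_f)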

Tracking information about evolving populations

Hi again,

I have already looked into the documentation but could not find anything concerning my question.

My goal is to obtain information about the evolving population, like in this example:

import pygmo as pg
import numpy as np

class my_udp:
    def fitness(self, x):
        f1 = 1-np.exp(-((x[0]-1/np.sqrt(1))**2+(x[1]-1/np.sqrt(2))**2))
        f2 = 1-np.exp(-((x[0]+1/np.sqrt(1))**2+(x[1]+1/np.sqrt(2))**2))
        return [f1, f2]

    def get_nobj(self):
        return 2

    def get_bounds(self):
        return ([-4]*2, [4]*2)

pro = pg.problem(my_udp())

pop = pg.population(pro, size=8)

algo = pg.algorithm(pg.nsga2(gen=10))

# "manually" tracking vectors/fitnesses of the evolving population
for i in range(10):
    pop = algo.evolve(pop)
    fits, vectors = pop.get_f(), pop.get_x()
    print('Generation ', i, ':', fits)

# is it possible to extract information about the evolving population
# without the "manual" solution subsequently?
#algo.set_verbosity(100)
pop = algo.evolve(pop)
fits_end, vectors_end = pop.get_f(), pop.get_x()
print('After 10 generations:', fits_end)

So it already works with my manual solution, e.g. by saving the generational data in some data structure. But I am not sure if there is an easier alternative using some method, e.g. on the population object.

What would be the easiest way to keep track of evolving populations?

And why does a direct call of pop = algo.evolve(pop) always deliver exactly n individuals, as determined in the beginning? Isn't there also something like a "Hall of Fame" for good individuals from former generations, so that the resulting Pareto front is bigger than n? Or is this a result of the parallel design of pagmo?

Cheers
Cord
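
A possible alternative to the manual loop (a sketch, assuming the wrapped UDA exposes a get_log() method, as nsga2 does): let the algorithm log internally via set_verbosity() and extract the log afterwards.

import pygmo as pg

algo = pg.algorithm(pg.nsga2(gen=100))
algo.set_verbosity(10)                   # ask the algorithm to log every 10 generations
pop = pg.population(pg.problem(pg.zdt(prob_id=1)), size=40)
pop = algo.evolve(pop)
log = algo.extract(pg.nsga2).get_log()   # one entry per logged generation
print(log[:3])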

Pagmo Installation test code

Hello,

I am trying to run the quickstart test code for pagmo2. First, where should the test cpp file be located? I am getting errors that it cannot find the "pagmo/pagmo.hpp" and "pagmo/config.hpp" files.

SB

move semantics not rigorously thought-out

Hi,
here are a few suggestions for pagmo:

  1. Most constructors (e.g. pagmo::problem, pagmo::population, etc.) use move semantics (&&) to ingest their arguments. This (clearly) leaves the outside references invalid, as the resources are swapped. E.g., pagmo::problem should then not give the user the possibility to still access the leftover problem, for instance by turning it into a null_problem.
  2. When passing e.g. pagmo::pso to pagmo::algorithm, gen is ignored in evolve.
  3. m_fevals is not incremented in algorithm::evolve, as that function takes population by value, not by reference.

Generally, your usage of move semantics and references is not very thorough, so it would be worth spending some effort making the API more consistent.

Best,
Johannes

Malloc error running pykep2+pygmo2 example?

Hello

I hope this is the right place to submit this problem. I'm running Pykep 2.1 and Pygmo 2.6 on my MacBook Pro - installation was via Anaconda Navigator 1.8.2, and I'm running OSX 10.13.3.

I'm managing to code some simple Pykep and Pygmo problems which seem to be working fine, but when running Pykep example 5 (global optimization of a multiple gravity assist trajectory with one deep space manoeuvre per leg), I'm hitting a malloc problem that is generated after the call to the archipelago algorithm. I've copied the full output from my session below - the first four lines show my Python version and associated build information. Note that I am running the framework version of Python (pythonw), as required in order to have a functioning matplotlib in the OSX installation.

The session hangs after the final error message is issued. There are 8 malloc errors, one per value of n in the call to pg.archipelago (I've checked, and reducing n reduces the number of malloc errors).

I'm not sure what other information might be useful to help identify the problem, but any suggestions would be gratefully received!

Thanks

Nigel


(astro) npb-lap2:Astrodynamics nigelbannister$ pythonw
Python 3.6.4 | packaged by conda-forge | (default, Dec 23 2017, 16:54:01)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

import pykep as pk
pk.examples.run_example5()
Running a Self-Adaptive Differential Evolution Algorithm .... on 8 parallel islands
python(57434,0x7fff9e705340) malloc: *** error for object 0x7ffee6639cf0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
python(57436,0x7fff9e705340) malloc: *** error for object 0x7ffee15c7cf0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
python(57432,0x7fff9e705340) malloc: *** error for object 0x7ffee14a1cf0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
python(57431,0x7fff9e705340) malloc: *** error for object 0x7ffee1f16cf0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
python(57429,0x7fff9e705340) malloc: *** error for object 0x7ffeecdb7cf0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
python(57435,0x7fff9e705340) malloc: *** error for object 0x7ffee3dabcf0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
python(57430,0x7fff9e705340) malloc: *** error for object 0x7ffeef5aacf0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
python(57433,0x7fff9e705340) malloc: *** error for object 0x7ffee2e7acf0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug

Move from inheritance to encapsulation for the metas

After a discussion on gitter, it appears it may be more appropriate for meta algos/probs to encapsulate a problem/algorithm instance, instead of inheriting. The main advantages:

  • instead of blacklisting the methods we don't want to inherit, we whitelist (i.e., re-implement with a one-line wrapper) the methods we want. This seems a superior approach in our situation, as we are blacklisting lots of stuff;
  • we remove the cognitive confusion of an inheritance relation that ultimately has no other purpose than sharing common code;
  • we may be able to reduce the complexity of the Python bindings (but this part needs to be clarified better).

Disadvantages:

  • if we add/modify functionality in problem/algorithm we need to iterate over all metas to adapt them (?).

Noise Metaproblem

Add a new metaproblem "noise" which allows turning a UDP into a stochastic version of itself by adding different flavors of noise to the objective function.
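
An illustrative Python sketch of the idea (not the eventual pagmo implementation; in C++ this would be a meta-problem similar to translate or decompose): a wrapper UDP that perturbs the fitness of an inner problem with Gaussian noise and exposes set_seed(), so the resulting problem is flagged as stochastic.

import numpy as np
import pygmo as pg

class noisy:
    def __init__(self, prob, sigma=0.1, seed=0):
        self._prob = prob            # a pygmo.problem (anything with fitness/get_bounds works)
        self._sigma = sigma
        self._rng = np.random.default_rng(seed)

    def fitness(self, x):
        f = np.asarray(self._prob.fitness(x), dtype=float)
        return (f + self._rng.normal(0.0, self._sigma, size=f.shape)).tolist()

    def get_bounds(self):
        return self._prob.get_bounds()

    def set_seed(self, seed):
        # exposing set_seed makes the wrapped problem stochastic in pagmo's sense
        self._rng = np.random.default_rng(seed)

prob = pg.problem(noisy(pg.problem(pg.rosenbrock(5)), sigma=0.05))
print(prob.is_stochastic(), prob.fitness([1.0] * 5))  # True, and a value close (but not equal) to 0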

Topology in pagmo2

Hello,

It seems that the topology classes from pagmo are not present in pagmo2. Is there a good way to achieve migration between islands in pagmo2 (pygmo, specifically) similar to what currently exists in pagmo 1? If not, will topology functionality be implemented in the future?

Thank you!

Steve

Add a check to hessian sparsity?

The following code in the UDP

    std::vector<sparsity_pattern> hessians_sparsity() const {
        return {
            {{0,0}, {1,1}},
            {{0,0}, {2,0}, {2,2}},
            {{}},
            {{}},
            {{}},
        };
    }

does not raise an exception upon pagmo::problem construction, but maybe it should! The hessians sparsity is not checked for entries such as {{}}, which the user may clearly intend to mean an empty hessian. The correct syntax for pagmo in that case would be {}; otherwise pagmo will expect a size-1 hessian to be returned by hessians, so that, for example, the following user-provided function would be incorrect:

    std::vector<vector_double> hessians(const vector_double &dv) const {
        return {
            {2, 4},
            {2, 2, 1},
            {},
            {},
            {}
        };
    }

and produce an error message if and when an algorithm calls hessians:

where: /home/dario/.local/include/pagmo/problem.hpp, 2246
what: On the hessian no. 2: Components returned: 0, should be 1

Multiobjective UDP

Hi,
Thank you so much for the superb work you've done!
Can you please provide an example of a multi-objective UDP? I tried simply adding the other objectives to the return list of the fitness function, but it always complains that this is a single-objective function and that I should thus return a list of length 1.
Best Regards,
Omar

Archipelago fails when not in a if __name__ == "__main__" block

I am trying to apply archipelago to a problem of mine, but it seems it is not possible to do so with any user-defined problem, except for the prepackaged ones on https://esa.github.io/pagmo2/docs/python/problems/py_problems.html.
When running the code below (as well as my own problem and other problems not in the above list), I get the errors shown in ArchipelagoError.txt, and I am not able to solve them myself.
Could this be due to the "thread safety" of my problems? I have not been able to change them from "None" to "Basic".

import pygmo as pg

class sphere_function:
    def __init__(self, dim):
        self.dim = dim

    def fitness(self, x):
        return [sum(x * x)]

    def get_bounds(self):
        return ([-1] * self.dim, [1] * self.dim)

    def get_name(self):
        return "Sphere Function"

    def get_extra_info(self):
        return "\tDimensions: " + str(self.dim)

prob = pg.problem(sphere_function(3))
algo = pg.algorithm(pg.bee_colony(gen = 20, limit = 20))
archi = pg.archipelago(n=1, algo=algo, prob=prob, pop_size=10)
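
A hedged note (an assumption based on how Python multiprocessing behaves, not a confirmed diagnosis for this report): with a Python UDP the archipelago spawns worker processes, so on platforms where new processes are spawned the archipelago construction and evolution should live under a main guard. A sketch of the same example restructured that way:

import pygmo as pg

class sphere_function:
    def __init__(self, dim):
        self.dim = dim

    def fitness(self, x):
        return [sum(x * x)]

    def get_bounds(self):
        return ([-1] * self.dim, [1] * self.dim)

if __name__ == "__main__":
    prob = pg.problem(sphere_function(3))
    algo = pg.algorithm(pg.bee_colony(gen=20, limit=20))
    archi = pg.archipelago(n=1, algo=algo, prob=prob, pop_size=10)
    archi.evolve()
    archi.wait_check()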

Serialization error (deepcopy) with object in UDP

Hi everybody,

I am trying to use an object from another package within the UDP class but ran into some serialization issues.

Basically, my problem is implemented as follows:

class SomeParentClass():

    def __init__(self, my_fancy_object):
        self.my_fancy_object = my_fancy_object  #object that cannot be serialized

    def change_something(self, x):
        # apply a function to self.my_fancy_object to change its structure 
        return True

    def calculate_something(self):
        # apply an algorithm to self.my_fancy_object 
        return True

    def calculate_f1(self):
        # calculate first objective value from changed self.my_fancy_object
        return f1

    def calculate_f2(self):
        # calculate second objective value from changed self.my_fancy_object
        return f2


class MyUDP(SomeParentClass):

    def fitness(self, x):

        self.change_something(x)
        self.calculate_something()

        f1 = self.calculate_f1()
        f2 = self.calculate_f2()

        return [f1, f2]

    def get_nobj(self):
        return 2

    def get_bounds(self):
        return ([-4]*2, [4]*2)

The problem is that in this implementation it throws a serialization error, Uncopyable field encountered when deep copying outside..., which stems from copying my_fancy_object, which is a pyomo model.

The weird thing is that deepcopying my_fancy_object manually (with copy.deepcopy) works, and it only throws an error when I implement my problem as described above.

Which brings me to my questions:

How can I solve this serialization error? Are there ways to implement what I want without deepcopying the object? Or are there any other ways to fix this including a deepcopy of the object?

Thanks in advance!
Cord
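
One possible direction (a sketch under the assumption that the unpicklable object can be rebuilt from data that is itself picklable; the stand-in below uses a threading.Lock in place of the pyomo model): customize __getstate__/__setstate__ so that the problematic member is dropped from the copied state and rebuilt afterwards.

import copy
import threading

class fancy_udp:
    def __init__(self, dim):
        self._dim = dim
        self._fancy = threading.Lock()   # stand-in for an unpicklable object (e.g. a pyomo model)

    def fitness(self, x):
        return [sum(v * v for v in x)]

    def get_bounds(self):
        return ([-1] * self._dim, [1] * self._dim)

    def __getstate__(self):
        state = self.__dict__.copy()
        del state["_fancy"]              # drop the unpicklable member from the serialized state
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._fancy = threading.Lock()   # rebuild it from the (picklable) remaining state

print(copy.deepcopy(fancy_udp(3)).get_bounds())  # the deep copy now succeeds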

WFG multi-objective benchmark problems

The ZDT and DTLZ problems are the most frequently used benchmarks in the MOEA literature; however, they are outdated, partially too easy for modern algorithms, scale poorly (if at all) and have other inherent flaws. The WFG test problems provide a better benchmark, as they have a more diverse feature landscape and try to address the weaknesses of ZDT and DTLZ. An extensive review of this issue was done by:

Simon Huband, Philip Hingston, Luigi Barone and Lyndon While: "A Review of Multi-objective Test Problems and a Scalable Test Problem Toolkit"

Implementing the WFG benchmark problems in pagmo is needed.

Wrapping more solvers

A dump of solvers/algorithms that might be useful to have in pagmo:

A couple of LBFGS implementations:

Is SNOPT available?

I am unable to find SNOPT in pagmo2. I can find it in pagmo1. Was it removed? I work on non-linear constrained optimization (sparse) for robot motion planning. This package looks to fit my use. Thanks!

De-atomicize the problem counters

The problem class contains objective function/gradient/hessians evaluation counters implemented via std::atomic. The original rationale for this choice was to allow safe multithreaded usage of the problem class. However, it turned out that in many cases UDPs (and by extension, pagmo::problem) cannot be used safely concurrently. We eventually renounced the idea of striving for safe concurrent usage of the problem class, and as a result we should turn the counters into plain integrals. This will simplify the code and improve performance.

Additionally, it looks like there are alignment bugs in the MSVC compiler that prevent usage of std::atomic as class members in 32bit builds:

https://ci.appveyor.com/project/conda-forge/staged-recipes/build/1.0.12952/job/jwc593r6kic4yyah
http://stackoverflow.com/questions/21743144/using-stdatomic-with-aligned-classes

problem creation, related question

In the case of looking for an integer within predefined lower/upper bounds, how should the variable vector X be defined? (I'm not looking for decimal numbers, but for integers instead.)

pygmo parallel evaluation of `problem.fitness` for an algorithm

I have a problem where I call a remote server to asynchronously evaluate problem.fitness, since it is a fairly expensive task. This means that I should be able to evaluate the entire population in parallel within a single process. Is this possible in pygmo, or is there a recommended way to do this? I know that islands implement parallel processing, but that only parallelizes multiple algorithm.evolve(population) calls.

From what I have been reading, all problem and algorithm instances return pygmo.thread_safety.none, which makes this a no-go. Is this true?

Each call to problem.fitness takes about 20 ms to evaluate. So I would like to evaluate the population concurrently so I can increase the number of evaluations / second.
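
A manual sketch of one way to evaluate a batch of decision vectors concurrently (an assumption-laden workaround, not a pygmo feature: it presumes the expensive part is I/O-bound, e.g. a remote call, so Python threads can overlap the waiting) and feed the results into a population without re-evaluating them.

from concurrent.futures import ThreadPoolExecutor
import numpy as np
import pygmo as pg

class remote_udp:
    def fitness(self, x):
        # placeholder for the ~20 ms call to the remote server
        return [sum(v * v for v in x)]

    def get_bounds(self):
        return ([-1] * 10, [1] * 10)

udp = remote_udp()
prob = pg.problem(udp)
pop = pg.population(prob)                     # start empty: no fitness evaluations yet
lb, ub = prob.get_bounds()
xs = np.random.default_rng(0).uniform(lb, ub, size=(32, len(lb)))

# Calling the raw UDP (not pg.problem) from several threads sidesteps the
# thread-safety question; the threads overlap while waiting on the remote I/O.
with ThreadPoolExecutor(max_workers=8) as ex:
    fs = list(ex.map(udp.fitness, xs))

for x, f in zip(xs, fs):
    pop.push_back(x, f)                       # store x together with its fitness, no re-evaluation
print(len(pop), pop.champion_f)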

Expose in pygmo various utility functions

  • It would be nice if all the utility functions available on the C++ side were available in Python as well. E.g., all the constrained optimization sorting utils are not exposed in pygmo.
  • Once this is done, the docstrings of population in pygmo should be modified to refer to the exposed sorting utils, rather than to the C++ ones.
  • We should probably also export the bits from rng.hpp to offer seed control to Python users.
