
evolutionary-intelligence / pypop


PyPop7: A Pure-Python Library for POPulation-based Black-Box Optimization (BBO), especially their *Large-Scale* versions/variants (evolutionary algorithms/swarm-based optimizers/pattern search/...). [https://pypop.rtfd.io/]

Home Page: https://pypop.readthedocs.io/

License: GNU General Public License v3.0

Python 100.00%
black-box-optimization continuous-optimization derivative-free-optimization differential-evolution direct-search estimation-of-distribution-algorithms evolution-strategies evolutionary-algorithms genetic-algorithms global-optimization gradient-free-optimization large-scale-optimization metaheuristics nonlinear-optimization numerical-optimization optimization-algorithms particle-swarm-optimization population-based-optimization random-search zeroth-order-optimization

pypop's Introduction

PyPop7: a Pure-PYthon open-source library of POPulation-based (evolution / swarm / pattern search) black-box OPtimization


PyPop7 is a pure-Python, open-source, and actively maintained library of POPulation-based OPtimization for single-objective, real-parameter, black-box problems. Its goal is to provide a unified interface and a set of elegant algorithmic implementations (e.g., evolutionary algorithms, swarm-based optimizers, pattern search) for Black-Box Optimization (BBO), particularly population-based optimizers, in order to facilitate research repeatability, BBO benchmarking, and, especially, real-world applications.

More specifically, to alleviate the curse of dimensionality, the focus of PyPop7 is to cover the state of the art in Large-Scale Optimization (LSO), though many small/medium-scale versions and variants are also included (mainly for theoretical, benchmarking, or educational purposes). For a list of public use cases of PyPop7, please refer to this online document for details. Although we have chosen the GPL-3.0 license, anyone can use, modify, and improve this open-source Python library entirely freely for any purpose, whether open-source or closed-source.

How to Quickly Use

The following three steps are enough to use the black-box optimization power of the PyPop7 library:

  1. Use pip to install pypop7 in a Python 3 virtual environment (created via venv or conda):
$ pip install pypop7
  2. Define the objective/cost/fitness function to be minimized for the optimization problem at hand:
import numpy as np  # for numerical computation, which is also the *computing engine* of pypop7

# the example below is Rosenbrock, a notorious benchmark function from the optimization community
def rosenbrock(x):
    return 100.0*np.sum(np.square(x[1:] - np.square(x[:-1]))) + np.sum(np.square(x[:-1] - 1.0))

# define the fitness (cost) function to be minimized and also its settings
ndim_problem = 1000
problem = {'fitness_function': rosenbrock,  # cost function
           'ndim_problem': ndim_problem,  # dimension
           'lower_boundary': -5.0*np.ones((ndim_problem,)),  # lower search boundary
           'upper_boundary': 5.0*np.ones((ndim_problem,))}  # upper search boundary

Note that, without loss of generality, only minimization is considered in this library, since maximization can easily be converted to minimization by negating the objective function.
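For example, a maximization problem can be handled by minimizing the negated objective (a minimal sketch; profit is a hypothetical function used only for illustration):

def profit(x):  # hypothetical objective to be *maximized*
    return -np.sum(np.square(x - 2.0))

def neg_profit(x):  # minimizing this is equivalent to maximizing profit
    return -profit(x)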

  3. Run one or more black-box optimizers on this optimization problem:
# here we choose LM-MA-ES owing to its low complexity and metric-learning ability for LSO:
#     https://pypop.readthedocs.io/en/latest/es/lmmaes.html
from pypop7.optimizers.es.lmmaes import LMMAES
# define all the necessary algorithm options (which may differ among different optimizers)
options = {'fitness_threshold': 1e-10,  # terminate when the best-so-far fitness is lower than this threshold
           'max_runtime': 3600,  # 1 hour (terminate when the actual runtime exceeds it)
           'seed_rng': 0,  # seed of random number generation (which must be explicitly set for repeatability)
           'x': 4.0*np.ones((ndim_problem,)),  # initial mean of search (mutation/sampling) distribution
           'sigma': 3.0,  # initial global step-size of search distribution (not necessarily optimal)
           'verbose': 500}
lmmaes = LMMAES(problem, options)  # initialize the optimizer
results = lmmaes.optimize()  # run its (time-consuming) search process
print(results)
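
The returned results is a plain Python dictionary; among other fields, it contains the best-so-far solution and its fitness:

print(results['best_so_far_y'])  # best-so-far fitness (cost) value
print(results['best_so_far_x'])  # best-so-far solution found during search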

Note that for PyPop7, the number 7 was added only because the name pypop had already been registered by others on PyPI. The butterfly icon of PyPop7 alludes to the book (a complete variorum edition) by Fisher, "the greatest of Darwin's successors": The Genetical Theory of Natural Selection (with butterflies on its cover), which inspired the proposal of Genetic Algorithms (GA). Please refer to https://pypop.rtfd.io/ for the online documentation of this, we dare say, well-designed library (it has received several praises from others).

A (Still Growing) Number of Black-Box Optimizers (BBO)

For new/missing BBO, we provide a unified API for adding them freely, provided they satisfy our design philosophy (see the development guide for details). Note that Ant Colony Optimization (ACO) and Tabu Search (TS) are not covered in this open-source Python library, since they work well mainly in discrete/combinatorial search spaces. Furthermore, brute-force (exhaustive/grid) search is also excluded, since it works only in very low dimensions (typically << 10). In a future version, we will consider adding Simultaneous Perturbation Stochastic Approximation (SPSA) to this library.


  • lso: indicates the specific BBO version for LSO (dimension >> 1000).
  • c: indicates the competitive (or de facto) BBO version for small/medium-dimensional problems (though it may work well under certain LSO circumstances).
  • b: indicates the baseline BBO version, mainly of theoretical/educational interest owing to its simplicity (relative ease of mathematical analysis).

Note that this classification, based only on the dimension of the objective function, is just a rough guideline for algorithm selection. In practice, perhaps the simplest approach to algorithm selection is trial-and-error (as sketched below); more advanced Automated Algorithm Selection techniques can also be tried.
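
In its simplest form, trial-and-error just runs several optimizers on the same problem and keeps the best result. A minimal sketch, reusing the problem and options dictionaries from the quick-start section (LMMAES and LMCMA are both provided by this library):

from pypop7.optimizers.es.lmcma import LMCMA
from pypop7.optimizers.es.lmmaes import LMMAES

best = None
for Optimizer in (LMMAES, LMCMA):  # try several optimizers, keep the best run
    results = Optimizer(problem, dict(options)).optimize()
    if best is None or results['best_so_far_y'] < best['best_so_far_y']:
        best = results
print(best['best_so_far_y'])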


Computational Efficiency

For large-scale optimization (LSO), computational efficiency is an indispensable performance criterion of BBO/DFO/ZOO in the post-Moore era. To obtain computation that is as high-performance as possible, NumPy is heavily used in this library as the basis of numerical computation, along with SciPy and scikit-learn. Sometimes Numba is also utilized to further reduce the wall-clock time, as illustrated below.
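
For example, an expensive fitness function can often be JIT-compiled with Numba before being passed to any optimizer. A minimal sketch (whether this pays off depends on the function, and Numba is an optional dependency here):

import numpy as np
from numba import njit

@njit  # compiled to machine code on the first call; later calls bypass the interpreter
def sphere(x):
    return np.sum(np.square(x))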

Folder Structure

The main folder structure of this open-source library PyPop7 is presented below:

  • .circleci: for automatic testing based on pytest.
    • config.yml: configuration file in CircleCI.
  • .github: all configuration files for GitHub.
  • docs: for the online documentation.
  • pypop7: all Python source code of BBO.
  • tutorials: a set of tutorials.
  • .gitignore: files to be ignored by Git.
  • .readthedocs.yaml: for Read the Docs.
  • CODE_OF_CONDUCT.md: code of conduct.
  • LICENSE: open-source license.
  • README.md: basic information of this library.
  • coverage-badge.svg: coverage rate of testing, calculated via Coverage.py and generated via https://smarie.github.io/python-genbadge/.
  • pyproject.toml: for PyPI.
  • requirements.txt: for development.
  • setup.cfg: for PyPI (used via pyproject.toml).

References

For each optimization algorithm family, we provide several representative applications published in some (though not all) top-tier journals/conferences (such as Nature, Science, PNAS, PRL, JACS, JACM, PIEEE, JMLR, ICML, NeurIPS, ICLR, CVPR, ICCV, and RSS, just to name a few), collected in the paper list DistributedEvolutionaryComputation.

Sponsor

From 2021 to 2023, this open-source pure-Python library PyPop7 was supported by the Shenzhen Fundamental Research Program under Grant No. JCYJ20200109141235597 (2,000,000 Yuan). It is now supported by the Guangdong Basic and Applied Basic Research Foundation under Grants No. 2024A1515012241 and No. 2021A1515110024. Furthermore, Qiqi Duan, one of its core developers, is also seeking possible new sponsors from enterprises.

Citation

If this open-source pure-Python library PyPop7 is used in your paper or project, you are highly welcome (but NOT required) to cite the following arXiv preprint: Duan, Q., Zhou, G., Shao, C., Wang, Z., Feng, M., Huang, Y., Tan, Y., Yang, Y., Zhao, Q. and Shi, Y., 2024. PyPop7: A pure-Python library for population-based black-box optimization. arXiv preprint arXiv:2212.05652. (Now submitted to JMLR, under review.)

Star History

[star-history chart (PyPop7-Star-Data) omitted]

pypop's People

Contributors

524130120, chang-shao, chocooni, evolutionary-intelligence, fmyzckj, guochenzhou, hezonghan, hust1booze, iliyafiks, tarokingcn, ttt-noora, youngea


pypop's Issues

Tips to improve the results

I am trying to solve an optimization problem. I selected 12 algorithms [VKDCMA, VDCMA, R1ES, RMES, CCMAES2016, FMAES, HCC, LMCMA, LMCMAES, OPOA2015, SAMAES, XNES] and ran each with 100 different seeds, but I find that they are not doing well. The most effective approach is "multi-start local optimization" (repeatedly picking a random initial point for a local-search optimizer such as BFGS). Maybe I'm doing something wrong?
Below I provide a "sandbox" that only runs FMAES. Sorry for numba (but it's 10 times faster than numpy) and tqdm.

import math
import warnings

import numpy
import scipy
from numba import njit
from pypop7.optimizers.es.fmaes import FMAES
from scipy import optimize
from tqdm import tqdm

warnings.simplefilter(action='ignore', category=FutureWarning)


@njit(fastmath=True)
def power_function(x, k):
    N0 = len(x) // 3
    N_sphere = N0 // 2

    coords = numpy.zeros((N0, 3), dtype=float)
    ampl = numpy.zeros(N0, dtype=float)
    R_sources = numpy.zeros(N0, dtype=float)

    for i in range(N0):
        r = 1 if i < N_sphere else 1.1
        R_sources[i] = r
        ampl[i] = x[2 * N0 + i]

        theta = x[i]
        phi = x[N0 + i]

        sin_theta = math.sin(theta)

        coords[i, 0] = r * sin_theta * math.cos(phi)
        coords[i, 1] = r * sin_theta * math.sin(phi)
        coords[i, 2] = r * math.cos(theta)

    distances = numpy.zeros((N0, N0), dtype=float)
    for i in range(N0):
        for j in range(i + 1, N0):
            d = math.sqrt(
                (coords[i, 0] - coords[j, 0]) ** 2 + (coords[i, 1] - coords[j, 1]) ** 2 + (
                            coords[i, 2] - coords[j, 2]) ** 2
            )
            distances[i, j] = distances[j, i] = d

    power = 1.0

    for i in range(N0):
        power += 2 * ampl[i] * numpy.sinc(k * R_sources[i] / numpy.pi)

    for i in range(N0):
        for j in range(N0):
            power += ampl[i] * ampl[j] * numpy.sinc(k * distances[i, j] / numpy.pi)

    return power


def multi_start_optimization(func, bounds, n_starts=10, method='BFGS', callback=None, seed=None, verbose=True,
                             **kwargs):
    best_result = None
    numpy.random.seed(seed)

    for _ in (pbar := tqdm(range(n_starts), disable=not verbose)):
        x0 = numpy.random.rand(*bounds[0].shape) * (bounds[1] - bounds[0]) + bounds[0]

        result = scipy.optimize.minimize(func, x0, method=method, callback=callback, **kwargs)

        if best_result is None or result.fun < best_result.fun:
            best_result = result
            pbar.set_postfix({'best_result_so_far': best_result.fun})

    return best_result

n_sources = 20

lower_boundary = numpy.array([0] * n_sources + [0] * n_sources + [-2] * n_sources)
upper_boundary = numpy.array([numpy.pi] * n_sources + [2 * numpy.pi] * n_sources + [2] * n_sources)
function_to_minimize = lambda x: power_function(x, k=7.5)

ms_result = multi_start_optimization(function_to_minimize, (lower_boundary, upper_boundary), n_starts=300, seed=23,
                                     tol=1e-4, verbose=True)

print(ms_result)

options = {
    'fitness_threshold': 1e-4,
    'max_runtime': 30_000,
    'seed_rng': 500,
    'max_function_evaluations': 100_000_000,
    'sigma': 0.3,
    'verbose': 500
}

problem = {'fitness_function': function_to_minimize,  # cost function
           'ndim_problem':  3 * n_sources,  # dimension
           'lower_boundary': lower_boundary,  # search boundary
           'upper_boundary': upper_boundary
          }

solver = FMAES(problem, options)  # initialize the optimizer
results = solver.optimize()

print(results['best_so_far_y'])

Output:

100%|██████████| 300/300 [03:18<00:00,  1.51it/s, best_result_so_far=0.162]
  message: Optimization terminated successfully.
  success: True
   status: 0
      fun: 0.16172215956324537
        x: [ 6.341e-01  1.739e+00 ... -4.215e-01 -2.126e-01]
      nit: 249
      jac: [-1.187e-05  3.090e-05 ...  2.599e-05  1.672e-05]
 hess_inv: [[ 3.054e+00 -8.579e-02 ...  2.756e-01  1.871e-01]
            [-8.579e-02  1.408e+00 ...  2.143e-02 -9.474e-02]
            ...
            [ 2.756e-01  2.143e-02 ...  5.659e-01  1.020e-01]
            [ 1.871e-01 -9.474e-02 ...  1.020e-01  6.574e-01]]
     nfev: 15799
     njev: 259
  * Generation 0: best_so_far_y 1.65875e+01, min(y) 1.65875e+01 & Evaluations 16
  * Generation 500: best_so_far_y 4.43191e-01, min(y) 4.44579e-01 & Evaluations 8016
  * Generation 1000: best_so_far_y 3.63186e-01, min(y) 3.63186e-01 & Evaluations 16016
  * Generation 1500: best_so_far_y 3.53268e-01, min(y) 3.53329e-01 & Evaluations 24016
  * Generation 2000: best_so_far_y 3.05547e-01, min(y) 3.05707e-01 & Evaluations 32016
  * Generation 2500: best_so_far_y 2.95432e-01, min(y) 2.95478e-01 & Evaluations 40016
  * Generation 3000: best_so_far_y 2.79785e-01, min(y) 2.79901e-01 & Evaluations 48016
  * Generation 3500: best_so_far_y 2.69350e-01, min(y) 2.69350e-01 & Evaluations 56016
  * Generation 4000: best_so_far_y 2.65173e-01, min(y) 2.65265e-01 & Evaluations 64016
  * Generation 4500: best_so_far_y 2.62260e-01, min(y) 2.62262e-01 & Evaluations 72016
  * Generation 5000: best_so_far_y 2.61340e-01, min(y) 2.61340e-01 & Evaluations 80016
  ...............................
....... *** restart *** .......
  * Generation 0: best_so_far_y 1.95928e-01, min(y) 9.80403e+00 & Evaluations 3197296
  * Generation 500: best_so_far_y 1.95928e-01, min(y) 2.09076e-01 & Evaluations 3709296
  * Generation 1000: best_so_far_y 1.84766e-01, min(y) 1.84766e-01 & Evaluations 4221296
  * Generation 1003: best_so_far_y 1.84766e-01, min(y) 1.84766e-01 & Evaluations 4223344
 ....... *** restart *** .......
  * Generation 0: best_so_far_y 1.84766e-01, min(y) 1.28830e+01 & Evaluations 4225392
  * Generation 500: best_so_far_y 1.84766e-01, min(y) 4.66329e-01 & Evaluations 5249392
  * Generation 1000: best_so_far_y 1.72185e-01, min(y) 1.74094e-01 & Evaluations 6273392
  * Generation 1087: best_so_far_y 1.72185e-01, min(y) 1.72500e-01 & Evaluations 6449520
 ....... *** restart *** .......
  * Generation 0: best_so_far_y 1.72185e-01, min(y) 1.31143e+01 & Evaluations 6453616
  * Generation 500: best_so_far_y 1.72185e-01, min(y) 3.64424e-01 & Evaluations 8501616
  * Generation 780: best_so_far_y 1.72185e-01, min(y) 2.91337e-01 & Evaluations 9644400
 ....... *** restart *** .......
  * Generation 0: best_so_far_y 1.72185e-01, min(y) 1.11503e+01 & Evaluations 9652592
  * Generation 500: best_so_far_y 1.72185e-01, min(y) 3.37978e-01 & Evaluations 13748592
  * Generation 685: best_so_far_y 1.72185e-01, min(y) 2.82461e-01 & Evaluations 15255920
 ....... *** restart *** .......
  * Generation 0: best_so_far_y 1.72185e-01, min(y) 9.21559e+00 & Evaluations 15272304
  * Generation 500: best_so_far_y 1.72185e-01, min(y) 4.49968e-01 & Evaluations 23464304
  * Generation 717: best_so_far_y 1.72185e-01, min(y) 4.29491e-01 & Evaluations 27003248
 ....... *** restart *** .......
  * Generation 0: best_so_far_y 1.72185e-01, min(y) 1.20239e+01 & Evaluations 27036016
  * Generation 215: best_so_far_y 1.72185e-01, min(y) 6.52917e-01 & Evaluations 34048368
 ....... *** restart *** .......
  * Generation 0: best_so_far_y 1.72185e-01, min(y) 8.29719e+00 & Evaluations 34113904
  * Generation 500: best_so_far_y 1.72185e-01, min(y) 6.05617e-01 & Evaluations 66881904
  * Generation 802: best_so_far_y 1.72185e-01, min(y) 5.39836e-01 & Evaluations 86608240
 ....... *** restart *** .......
  * Generation 0: best_so_far_y 1.72185e-01, min(y) 1.50296e+01 & Evaluations 86739312
  * Generation 102: best_so_far_y 1.72185e-01, min(y) 6.62570e-01 & Evaluations 100000000

So, the result of multi-start optimization is 0.161725642456162, while FMAES gives 0.172185 after 100 million function evaluations. Moreover, the last 94 million evaluations did not improve the result. What can I do? Changing sigma and the number of individuals did not get me below 0.1617.

Possible bug after restart of RMES

Hi there,

I think there may be a bug in the handling of the delay factor in rmes.py after a restart, but I could be mistaken. After an optimisation run converged, the implementation restarted the run:

 ....... *** restart *** .......

Then, after one generation, it raised an error. See the following code output after restart 1:

* Generation 1: best_so_far_y -8.09000e-01, min(y) 2.19336e+01 & Evaluations 9352
Traceback (most recent call last):
  File "code.py", line 878, in <module>
    results = lmmaes.optimize()
  File "Python\Python39\lib\site-packages\pypop7\optimizers\es\rmes.py", line 154, in optimize       
    mean, p, s, mp, t_hat = self._update_distribution(x, mean, p, s, mp, t_hat, y, y_bak)
  File "Python\Python39\lib\site-packages\pypop7\optimizers\es\rmes.py", line 124, in _update_distribution
    mean, p, s = R1ES._update_distribution(self, x, mean, p, s, y, y_bak)
  File "Python\Python39\lib\site-packages\pypop7\optimizers\es\r1es.py", line 144, in _update_distribution
    self.sigma *= np.exp(s/self.d_sigma)
numpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'multiply' output from dtype('float64') to dtype('int32') with casting rule 'same_kind'

The inputs were as follows;

options = {'max_runtime': 17*60*60,
           'seed_rng': 16,
           'x': x,
           'sigma': 1,
           'verbose': 1,
           'saving_fitness': 100,
           'd_sigma': 10}

The problem had 100 dimensions. Any help would be greatly appreciated!
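
Purely as an untested guess from the traceback: the output-casting error suggests an integer dtype reaching self.sigma, so passing the step-size and the starting point explicitly as floats may avoid it (a sketch, not a confirmed fix):

options = {'max_runtime': 17*60*60,
           'seed_rng': 16,
           'x': np.asarray(x, dtype=float),  # ensure a float64 starting point
           'sigma': 1.0,  # float rather than int
           'verbose': 1,
           'saving_fitness': 100,
           'd_sigma': 10.0}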

n_parents issue

In ES.__init__, there is a bug with the n_parents parameter.

The code:

import numpy
from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
from pypop7.optimizers.es.ccmaes2016 import CCMAES2016
problem = {'fitness_function': rosenbrock,  # define problem arguments
           'ndim_problem': 2,
           'lower_boundary': -5*numpy.ones((2,)),
           'upper_boundary': 5*numpy.ones((2,))}
options = {'max_function_evaluations': 5000,  # set optimizer options
           'seed_rng': 2022,
           'mean': 3*numpy.ones((2,)),
           'sigma': 0.1,
           'n_parents': 10}  # the global step-size may need to be tuned for better performance
ccmaes2016 = CCMAES2016(problem, options)  # initialize the optimizer class
results = ccmaes2016.optimize()  # run the optimization process

This fails with AttributeError: 'CCMAES2016' object has no attribute '_mu_eff'.

This is because the _mu_eff initialization is never reached:

if self.n_parents is None:  # number of parents (μ: mu), parental population size
    self.n_parents = int(self.n_individuals/2)
    if self.n_parents > 1:
        self._w, self._mu_eff = self._compute_weights()
        self._e_chi = np.sqrt(self.ndim_problem)*(  # E[||N(0,I)||]: expectation of chi distribution
            1.0 - 1.0/(4.0*self.ndim_problem) + 1.0/(21.0*np.square(self.ndim_problem)))

I suspect there is wrong indentation (tabulation) at line 123.

Optimizers don't take lower and upper boundaries into account

As a test, I want to find the min of the function sum(x) for x between 0 and 1. I tried to check this for all optimizers.

The code that checks it:

import pkgutil
import importlib

import numpy


def discover_optimizers(package_name):
    package = importlib.import_module(package_name)
    all_optimizers = []
    for _, module_name, is_pkg in pkgutil.iter_modules(package.__path__):
        if not is_pkg:
            continue
        full_module_name = f"{package_name}.{module_name}"
        subpackage = importlib.import_module(full_module_name)
        for _, class_name, _ in pkgutil.iter_modules(subpackage.__path__):
            try:
                if not class_name.startswith('_'):
                    full_class_name = f"{full_module_name}.{class_name}"
                    class_module = importlib.import_module(full_class_name)
                    class_obj = getattr(class_module, class_name.upper())
                    all_optimizers.append(class_obj)
            except:
                pass
    return all_optimizers

all_optimizers = discover_optimizers('pypop7.optimizers')
print(len(all_optimizers))

optim_func = lambda x: x.sum()
n_dim = 4

lower_boundary = numpy.array([0] * n_dim)
upper_boundary = numpy.array([1] * n_dim)

options = {
    'fitness_threshold': 1e-3,  # terminate when the best-so-far fitness is lower than this threshold
    'max_runtime': 300,  # 300 seconds (terminate when the actual runtime exceeds it)
    'seed_rng': 0,  # seed of random number generation (which must be explicitly set for repeatability)
    'verbose': 0,
    'max_function_evaluations': 100000,
    'sigma': 0.3
}
problem = {'fitness_function': optim_func,  # cost function
           'ndim_problem': n_dim,  # dimension
           'lower_boundary': lower_boundary,  # search boundary
           'upper_boundary': upper_boundary,
           'k': n_dim - 1}

for opt in all_optimizers:
    try:
        optimizer = opt(problem.copy(), options.copy())
        result = optimizer.optimize()
        if result:
            print(opt, result['best_so_far_x'])
    except:
        pass

The result:

<class 'pypop7.optimizers.bo.lamcts.LAMCTS'> [-0.6760339   0.3618667  -0.16212286 -0.02880972]
<class 'pypop7.optimizers.cc.cocma.COCMA'> [-0.47686997  0.53185109 -0.07582775 -0.0714384 ]
<class 'pypop7.optimizers.cc.coea.COEA'> [0.00231813 0.00659672 0.02236841 0.01236731]
<class 'pypop7.optimizers.cc.cosyne.COSYNE'> [0. 0. 0. 0.]
<class 'pypop7.optimizers.cc.hcc.HCC'> [-0.51623094 -0.14621922 -0.20562318  0.36384467]
<class 'pypop7.optimizers.cem.dscem.DSCEM'> [-0.39263308  0.25119954 -0.37545458  0.10072805]
<class 'pypop7.optimizers.cem.mras.MRAS'> [-0.07683651 -0.04483756 -0.06484939  0.10204932]
<class 'pypop7.optimizers.cem.scem.SCEM'> [-0.39263308  0.25119954 -0.37545458  0.10072805]
<class 'pypop7.optimizers.de.cde.CDE'> [ 0.16445318 -0.42179586 -0.09557912  0.04130309]
<class 'pypop7.optimizers.de.code.CODE'> [ 0.32137146 -0.99359753 -0.10802254  0.69684811]
<class 'pypop7.optimizers.de.jade.JADE'> [-0.17073779  0.15351694  0.27778093 -0.44524287]
<class 'pypop7.optimizers.de.shade.SHADE'> [ 0.13695033  0.07221861 -0.04688718 -0.39555022]
<class 'pypop7.optimizers.de.tde.TDE'> [ 0.46981607 -0.54584549 -0.20370663 -0.16033081]
<class 'pypop7.optimizers.ds.cs.CS'> [-0.1459572   0.10163917  0.00670629  0.16155334]
<class 'pypop7.optimizers.ds.gps.GPS'> [-0.88697404  0.79292833 -0.14605473  0.13879303]
<class 'pypop7.optimizers.ds.hj.HJ'> [-1.40047131  0.34461043  0.50683001  0.44171118]
<class 'pypop7.optimizers.ds.nm.NM'> [-0.83975216  0.06450609 -0.03964336  0.76778177]
<class 'pypop7.optimizers.ds.powell.POWELL'> [6.61069614e-05 6.61069614e-05 6.61069614e-05 4.53103854e-04]
<class 'pypop7.optimizers.eda.aemna.AEMNA'> [ 0.11252365 -0.18191406 -0.07250522  0.13927807]
<class 'pypop7.optimizers.eda.emna.EMNA'> [-0.01052653 -0.03288489  0.33220645  0.10134024]
<class 'pypop7.optimizers.eda.rpeda.RPEDA'> [0. 0. 0. 0.]
<class 'pypop7.optimizers.eda.umda.UMDA'> [-0.35342327  0.27947705 -0.37516572  0.10423779]
<class 'pypop7.optimizers.ep.cep.CEP'> [-1.52930751  0.46344768  0.14693299  0.63753879]
<class 'pypop7.optimizers.ep.fep.FEP'> [ 0.20749682  0.18527865  0.79435323 -2.94729542]
<class 'pypop7.optimizers.ep.lep.LEP'> [ 1.10558115  0.24239287 -4.9921671   0.69668987]
<class 'pypop7.optimizers.es.ccmaes2009.CCMAES2009'> [ 0.20749043 -0.8293654   0.24668735  0.23266953]
<class 'pypop7.optimizers.es.ccmaes2016.CCMAES2016'> [ 0.13678102 -0.69954803  0.32041095  0.13767426]
<class 'pypop7.optimizers.es.cmaes.CMAES'> [-0.28272347 -1.41480348  0.95616745 -0.40147134]
<class 'pypop7.optimizers.es.csaes.CSAES'> [-0.05864292 -0.19553882  0.29017808 -0.19070991]
<class 'pypop7.optimizers.es.ddcma.DDCMA'> [-0.02227496 -1.06207523  0.7854238   0.16832468]
<class 'pypop7.optimizers.es.dsaes.DSAES'> [ 0.30795104 -0.47174714 -0.784923    0.19072515]
<class 'pypop7.optimizers.es.fcmaes.FCMAES'> [ 0.24055367 -0.43308581  0.24361855 -0.28491516]
<class 'pypop7.optimizers.es.fmaes.FMAES'> [ 0.24110199 -0.82529545  0.27054464  0.21301834]
<class 'pypop7.optimizers.es.lmcma.LMCMA'> [-0.21275055 -0.66336508 -0.39555834  0.0026348 ]
<class 'pypop7.optimizers.es.lmcmaes.LMCMAES'> [ 0.63591624 -0.06963237 -0.70116549 -0.23170578]
<class 'pypop7.optimizers.es.lmmaes.LMMAES'> [-0.7468732   0.54044971 -0.83124089  0.28553827]
<class 'pypop7.optimizers.es.maes.MAES'> [ 0.24110199 -0.82529545  0.27054464  0.21301834]
<class 'pypop7.optimizers.es.mmes.MMES'> [ 0.18658459 -0.21728094 -0.22269701  0.1241464 ]
<class 'pypop7.optimizers.es.opoa2010.OPOA2010'> [ 0.78447352 -1.64450233  0.65211289 -0.41848015]
<class 'pypop7.optimizers.es.opoa2015.OPOA2015'> [ 0.64382685 -1.52102046  0.6747694  -0.45705725]
<class 'pypop7.optimizers.es.opoc2006.OPOC2006'> [ 1.09949707 -1.95056972  0.80188148 -0.50072606]
<class 'pypop7.optimizers.es.opoc2009.OPOC2009'> [ 0.78447352 -1.64450233  0.65211289 -0.41848015]
<class 'pypop7.optimizers.es.r1es.R1ES'> [ 0.01613307 -0.2390732  -0.1692492   0.0223703 ]
<class 'pypop7.optimizers.es.res.RES'> [ 1.18489362 -1.14707295  0.31712093 -0.39674864]
<class 'pypop7.optimizers.es.rmes.RMES'> [-0.52414836  0.35839744 -0.43697971 -0.25157316]
<class 'pypop7.optimizers.es.saes.SAES'> [ 0.37349651 -0.55888947  0.50092362 -0.50894695]
<class 'pypop7.optimizers.es.samaes.SAMAES'> [ 0.53545301 -0.73719703  0.65899345 -0.60428368]
<class 'pypop7.optimizers.es.sepcmaes.SEPCMAES'> [ 0.11559861 -1.04378521  0.23793796  0.2807491 ]
<class 'pypop7.optimizers.es.ssaes.SSAES'> [-0.43675648 -0.47365778 -0.01960891  0.48671414]
<class 'pypop7.optimizers.es.vdcma.VDCMA'> [ 1.36058364 -0.71217782 -0.44076356 -0.2856638 ]
<class 'pypop7.optimizers.es.vkdcma.VKDCMA'> [ 0.21444881 -0.88568877  0.32169121  0.25377752]
<class 'pypop7.optimizers.ga.g3pcx.G3PCX'> [-0.15269955  0.6733837  -0.18691776 -0.36552076]
<class 'pypop7.optimizers.ga.genitor.GENITOR'> [0.16925372 0.57813239 0.09181329 0.0330481 ]
<class 'pypop7.optimizers.ga.gl25.GL25'> [3.69095500e-04 4.19647735e-06 1.50494419e-04 2.18297760e-04]
<class 'pypop7.optimizers.nes.enes.ENES'> [ 1.81492    -2.57232933  1.48505313 -1.07211646]
<class 'pypop7.optimizers.nes.ones.ONES'> [ 1.81492    -2.57232933  1.48505313 -1.07211646]
<class 'pypop7.optimizers.nes.r1nes.R1NES'> [-3.70034242  0.60952904 -0.9282294   2.74587766]
<class 'pypop7.optimizers.nes.sges.SGES'> [ 1.81492    -2.57232933  1.48505313 -1.07211646]
<class 'pypop7.optimizers.nes.snes.SNES'> [-0.03319505 -0.71717639  0.48269242  0.10121638]
<class 'pypop7.optimizers.nes.xnes.XNES'> [ 1.81492    -2.57232933  1.48505313 -1.07211646]
<class 'pypop7.optimizers.pso.clpso.CLPSO'> [ 0.09216473  0.17694329 -0.15585932 -0.12475649]
<class 'pypop7.optimizers.pso.cpso.CPSO'> [-0.23658014 -0.07414678  0.1004639   0.20766364]
<class 'pypop7.optimizers.pso.ipso.IPSO'> [0. 0. 0. 0.]
<class 'pypop7.optimizers.pso.spso.SPSO'> [-0.41952165  0.17352028 -0.12113851  0.32384185]
<class 'pypop7.optimizers.pso.spsol.SPSOL'> [-0.22547512 -0.17689092 -0.01380493  0.38978217]
<class 'pypop7.optimizers.rs.bes.BES'> [-0.02723131 -0.08054957  0.08467001  0.02035118]
<class 'pypop7.optimizers.rs.gs.GS'> [ 0.01553554 -0.2359458   0.03920435  0.07267143]
<class 'pypop7.optimizers.rs.prs.PRS'> [0.01161162 0.06071727 0.00665576 0.02123513]
<class 'pypop7.optimizers.rs.rhc.RHC'> [ 0.66297826 -1.36597693  0.58673108 -0.29628104]
<class 'pypop7.optimizers.rs.srs.SRS'> [-0.283229    0.44578372 -0.47789181 -0.04463705]
<class 'pypop7.optimizers.sa.esa.ESA'> [0. 0. 0. 0.]
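
Until boundary handling is clarified, one user-side workaround is to enforce the bounds inside the objective itself, for example by clipping (a minimal sketch; adding a penalty term for out-of-bounds points is a common alternative):

def bounded(func, lower, upper):
    def wrapped(x):
        return func(numpy.clip(x, lower, upper))  # evaluate at the nearest in-bounds point
    return wrapped

optim_func_bounded = bounded(optim_func, lower_boundary, upper_boundary)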

No module named 'pypop7.optimizers.bo'

Thank you for providing such an invaluable collection. I have checked the "setup.cfg" and found that Bayesian Optimization (BO) is not included yet (No module named 'pypop7.optimizers.bo' and LAMCTS is not available). Please add BO so that we can try this method using pypop7.

Feature request: early stopping

Good day. Would it be possible to add an 'early_stopping' feature? Sometimes I want to stop the process when it gets stuck.
I can add it.
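
A minimal user-side sketch of one possible workaround, assuming nothing beyond the documented fitness_function interface: wrap the objective, count evaluations without meaningful improvement, and raise an exception to abort the run (the exception then needs to be caught around optimize(); the best-so-far value stays available in the wrapper's state):

import numpy as np

class EarlyStopping(Exception):  # raised to abort a stuck run
    pass

def with_early_stopping(func, patience=10_000, atol=1e-8):
    state = {'best': np.inf, 'stall': 0}
    def wrapped(x):
        y = func(x)
        if y < state['best'] - atol:  # meaningful improvement resets the counter
            state['best'], state['stall'] = y, 0
        else:
            state['stall'] += 1
            if state['stall'] >= patience:  # too long without improvement
                raise EarlyStopping(state['best'])
        return y
    return wrapped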

How to get candidate solutions from each iteration?

Thanks for the excellent work 🥳

  1. I'm looking for black-box optimization algorithms to perform prompt tuning on my neural network, which requires candidate solutions from each iteration.
  2. Following the “ask-tell” form of pycma, the pseudo-code is shown below:
while not es.stop():
    solutions = es.ask()
    fitness = [cma.ff.rosen(s) for s in solutions]
    es.tell(solutions, fitness)
  3. Question: How can I obtain the solutions from each iteration to calculate new fitness values (e.g., loss functions)?
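
One user-side possibility, assuming only the documented fitness_function interface (the optimizer calls it once per candidate solution), is to record every candidate inside the objective itself; here rosenbrock is the function from the quick-start section:

history = []  # (candidate, fitness) pairs, one per function evaluation

def recording_rosenbrock(x):
    y = rosenbrock(x)
    history.append((x.copy(), y))  # copy, since the optimizer may reuse the array
    return y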

Parallelization

Hello,

is it somehow possible to run the algorithms on multiple CPUs, since most laptops these days have multiple cores?
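
While distributing a single optimizer across cores would need library support, independent runs (e.g., with different random seeds) can be parallelized with the standard library alone. A minimal sketch, assuming the problem and options dictionaries from the quick-start section above:

from multiprocessing import Pool

from pypop7.optimizers.es.lmmaes import LMMAES

def run_one(seed):  # one independent optimization run per worker process
    return LMMAES(problem, dict(options, seed_rng=seed)).optimize()['best_so_far_y']

if __name__ == '__main__':
    with Pool(4) as pool:  # 4 worker processes
        print(min(pool.map(run_one, range(4))))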
