
pymetaheuristic's Introduction

pyMetaheuristic

Introduction

pyMetaheuristic is a robust Python Library crafted to provide a wide range of metaheuristic algorithms, ideal for tackling complex optimization tasks. It encompasses a diverse mix of algorithms, from traditional to modern methods. For a detailed list of these metaheuristics and their demonstrations, refer to Section 3. The library is also equipped with a selection of test functions, useful for benchmarking and evaluating algorithm performance. Details on these functions can be found in Section 4. Getting started with pyMetaheuristic is straightforward. Install the package using pip and begin exploring the available algorithms and test functions, as outlined in Sections 1 and 2. Whether you're addressing intricate optimization problems or experimenting with different algorithms, pyMetaheuristic is your essential toolkit.

Each metaheuristic includes two parameters: 'start_init' and 'target_value'. By default, 'start_init' is set to None, but it can be assigned an initial guess in the form of a NumPy array. The default value of 'target_value' is also None. However, you can specify a continuous value for it. If during the iterations, the objective function reaches a value that is equal to or less than this specified 'target_value', the iterations will halt.
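For example, the snippet below is a minimal sketch of both parameters, reusing the PSO call from the Usage section; the shape assumed for 'start_init' (one value per decision variable) and the stopping threshold of -0.9 are illustrative assumptions, not prescribed defaults.

import numpy as np
from pyMetaheuristic.algorithm import particle_swarm_optimization
from pyMetaheuristic.test_function import easom

parameters = {
    'swarm_size': 250,
    'min_values': (-5, -5),
    'max_values': (5, 5),
    'iterations': 500,
    'decay': 0,
    'w': 0.9,
    'c1': 2,
    'c2': 2,
    'verbose': True,
    'start_init': np.array([0.5, 0.5]),  # initial guess for (x1, x2); shape assumed for illustration
    'target_value': -0.9                 # iterations halt once the objective reaches -0.9 or less
}
pso = particle_swarm_optimization(target_function = easom, **parameters)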

Usage

  1. Install
pip install pyMetaheuristic
  2. Import
# Import PSO
from pyMetaheuristic.algorithm import particle_swarm_optimization

# Import a Test Function. Available Test Functions: https://bit.ly/3KyluPp
from pyMetaheuristic.test_function import easom

# OR Define your Custom Function. The function input should be a list of values, 
# each value representing a dimension (x1, x2, ...xn) of the problem.
import numpy as np
def easom(variables_values = [0, 0]):
    x1, x2     = variables_values
    func_value = -np.cos(x1)*np.cos(x2)*np.exp(-(x1 - np.pi)**2 - (x2 - np.pi)**2)
    return func_value

# Run PSO
parameters = {
    'swarm_size': 250,
    'min_values': (-5, -5),
    'max_values': (5, 5),
    'iterations': 500,
    'decay': 0,
    'w': 0.9,
    'c1': 2,
    'c2': 2,
    'verbose': True,
    'start_init': None,
    'target_value': None
}
pso = particle_swarm_optimization(target_function = easom, **parameters)

# Print Solution
variables = pso[:-1]
minimum   = pso[ -1]
print('Variables: ', np.around(variables, 4) , ' Minimum Value Found: ', round(minimum, 4) )

# Plot Solution
from pyMetaheuristic.utils import graphs
plot_parameters = {
    'min_values': (-5, -5),
    'max_values': (5, 5),
    'step': (0.1, 0.1),
    'solution': [variables],
    'proj_view': '3D',
    'view': 'browser'
}
graphs.plot_single_function(target_function = easom, **plot_parameters)
  3. Colab Demo

Try it in Colab:

  4. Test Functions

Multiobjective Optimization or Many Objectives Optimization

For Multiobjective Optimization or Many Objectives Optimization try pyMultiobjective

TSP (Travelling Salesman Problem)

For Travelling Salesman Problems, try pyCombinatorial

Acknowledgement

This section is dedicated to everyone who helped improve or correct the code. Thank you very much!

pymetaheuristic's People

Contributors

myraiser, valdecy


pymetaheuristic's Issues

Visualization

Thank you very much for your work; I learned a lot from your code. I recommend adding visualization of the objective values.

Bounds not observed in some routines

Hello, this is such a wonderful package. I really appreciate your work on this. I have noticed that in some routines the bounds are not observed (i.e. one of the particles will be set outside of min/max values.) One such case is for the moth flame routine and it looks like an np.clip has been left out in the update_position routine:

position[i,j] = flame_distance*math.exp(b_constant*rnd_2)*math.cos(rnd_2*2*math.pi) + flames[i,j]

should be

position[i,j] = np.clip(flame_distance*math.exp(b_constant*rnd_2)*math.cos(rnd_2*2*math.pi) + flames[i,j], min_values[j], max_values[j])

as per the alternate position case. Please would you be so kind as to fix this?

References

Hi,
thank you very much for this nice Python library, it seems very useful!
However, could you please add references to the exact papers your implementations are based on? That would be very helpful.

Cheers,
Paul

Question about food and best dragonfly update in the DA algorithm

I am working to minimise a function according to typical criteria in feature selection (a mix between accuracy and reduction rate).

I find that the DA algorithm is minimising all features to 0. I've been investigating, specifically in the 'update_food' function. When I get to

if (dimensions == dragonflies.shape[1] - 2):
    for k in range(0, dragonflies.shape[1]-2):
        food_position[0,k] = np.clip(food_position[0,k] - dragonflies[i,k], min_values[k], max_values[k])

I see that the dragonfly and the food being subtracted have the same values. This makes the food all 0's; in the following iterations, multiplying 0 by Levy and adding it to food[0,k] keeps it at 0. I don't know what makes the subtracted dragonfly exactly the same; what I do know is that it is marked as the best solution and never leaves that state.
It happens with many of the datasets I have tried.

Questions about Food/Predator update strategy in Dragonfly Algorithm

Great repo! I've recently been researching the Dragonfly Algorithm and trying to implement it. Regarding the README link to the original paper: it seems the original paper should be "Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems", published May 2015, https://link.springer.com/article/10.1007/s00521-015-1920-1 (if "original paper" refers to the paper that proposed the algorithm, not the paper that guided the implementation).


I have read (roughly) through both the original paper and the one mentioned in the README, and it seems neither paper introduces in detail how to update the food and the predator.

Environment

I am working on the source code instead of using it as a package, since I am trying to figure out what is wrong. I am using an SVM as my target function and returning (1 - accuracy), as the DA seems to minimize the target function rather than maximize it. The search space is the hyperparameters C (1, 1000) and Sigma (the RBF kernel width, (1, 100)).
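Roughly, the target function is a wrapper along these lines (a simplified sketch; the make_classification data is only a stand-in for my real dataset, and the sigma-to-gamma conversion is one common convention):

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in dataset; in practice X, y come from the actual training data
X, y = make_classification(n_samples = 200, n_features = 10, random_state = 0)

def svm_target(variables_values = [1, 1]):
    C, sigma = variables_values
    gamma    = 1.0 / (2.0 * sigma ** 2)                # translate RBF width sigma into sklearn's gamma
    model    = SVC(C = C, kernel = 'rbf', gamma = gamma)
    accuracy = cross_val_score(model, X, y).mean()     # cross-validated accuracy
    return 1 - accuracy                                # DA minimizes, so return the error rate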

Problem:

For C and Sigma the values should obviously be positive; however, in some cases (dimensions != dragonflies.shape[1]-1, no neighbors?) the algorithm updates the Food to 0 and sends it to the target function. I am confused about why we should set the food to 0 (a 1-d zero). Is that meant to remove the food-attraction impact on the velocity update? Doesn't sending a 0 to the target function violate the min and max restrictions?

If I have just misunderstood a line or the usage of the function, please let me know. I would appreciate it if you could explain or point out my mistake. Thanks!

Questions to ask

Hello, what causes this error when using your project? No module named 'pyMetaheuristic.algorithm'; 'pyMetaheuristic' is not a package

Lack of improvement when training on ARFF datasets

I am using the goa.py algorithm for optimization tasks, specifically training on an ARFF dataset. However, I've noticed that the algorithm does not show improvement across iterations when using this dataset.
Below is the fitness curve during the optimization process, as visual evidence of the lack of improvement. This is the ionosphere dataset, but the fitness curves look the same for other datasets.
[Figure: KNN_fitness_5_fold_cross_validation_goa_real_ionosphere]

Do you have an update plan?

Thank you for sharing!
Do you have an update plan? I hope this tool can include more recent metaheuristic algorithms.
