Reference: Kaiping Luo. Enhanced grey wolf optimizer with a model for dynamically estimating the location of the prey. Applied Soft Computing. 2019, 77: 225-235.
Variables | Meaning |
---|---|
pop | The number of wolves |
iter | The iteration number |
lb | The lower bound (list) |
ub | The upper bound (list) |
pos | The set of wolves (list) |
score | The score of wolves (list) |
dim | The dimension of the search space |
alpha_score | The score of the alpha wolf |
alpha_pos | The position of the alpha wolf (list) |
beta_score | The score of the beta wolf |
beta_pos | The position of the beta wolf (list) |
delta_score | The score of the delta wolf |
delta_pos | The position of the delta wolf (list) |
prey_pos | The dynamically estimated position of the prey (list) |
gbest | The global best score |
gbest_pos | The position of the global best (list) |
iter_best | The global best score of each iteration (list) |
con_iter | The last iteration number when "gbest" is updated |
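The variables above fit together in a standard grey-wolf loop: rank the pack, take the alpha, beta, and delta as leaders, estimate the prey's position from them, and move every wolf toward the leaders. The sketch below illustrates that flow; the prey estimate used here (a fitness-weighted average of the three leaders, blended into the usual GWO move) is an illustrative assumption, not Luo's exact estimation model, and `egwo_sketch` is a hypothetical helper, not the repo's `main`.

```python
import random

def egwo_sketch(obj, lb, ub, pop=30, iters=200, seed=0):
    # Minimal GWO-style loop guided by an estimated prey position.
    # The prey estimate (fitness-weighted average of alpha, beta, delta)
    # is an illustrative assumption, not Luo's exact model.
    rng = random.Random(seed)
    dim = len(lb)
    pos = [[rng.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(pop)]
    score = [obj(x) for x in pos]
    gbest, gbest_pos, con_iter = float('inf'), None, 0
    for t in range(iters):
        # Rank wolves and pick the three leaders (alpha, beta, delta).
        order = sorted(range(pop), key=lambda i: score[i])
        leaders = [pos[order[k]][:] for k in range(3)]
        lscores = [score[order[k]] for k in range(3)]
        # Estimated prey position: fitness-weighted leader average (assumption).
        w = [1.0 / (s - lscores[0] + 1.0) for s in lscores]
        wsum = sum(w)
        prey_pos = [sum(w[k] * leaders[k][d] for k in range(3)) / wsum
                    for d in range(dim)]
        a = 2.0 * (1.0 - t / iters)  # coefficient decreasing linearly to 0
        for i in range(pop):
            new = []
            for d in range(dim):
                # Standard GWO move toward each leader, then blend with
                # the estimated prey position.
                acc = 0.0
                for k in range(3):
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    D = abs(C * leaders[k][d] - pos[i][d])
                    acc += leaders[k][d] - A * D
                cand = 0.5 * (acc / 3.0 + prey_pos[d])
                new.append(min(max(cand, lb[d]), ub[d]))  # clip to bounds
            s = obj(new)
            if s < score[i]:  # greedy acceptance
                pos[i], score[i] = new, s
        best_i = min(range(pop), key=lambda i: score[i])
        if score[best_i] < gbest:
            gbest, gbest_pos, con_iter = score[best_i], pos[best_i][:], t
    return {'best score': gbest, 'best solution': gbest_pos,
            'convergence iteration': con_iter}
```

The return dictionary mirrors the output shape shown below (`best score`, `best solution`, `convergence iteration`).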
```python
if __name__ == '__main__':
    # Parameter settings
    pop = 50
    iter = 2000
    lb = [0, 0, 10, 10]
    ub = [99, 99, 200, 200]
    print(main(pop, iter, lb, ub))
```
The EGWO converges at its 1,271st iteration, and the global best value is 8050.913534658795.
```python
{
    'best score': 8050.913534658795,
    'best solution': [1.3005502034963052, 0.6428626394484327, 67.3860209065443, 10.0],
    'convergence iteration': 1271
}
```
The original GWO code: https://github.com/Xavier-MaYiMing/Grey-Wolf-Optimizer
If the global optimum shifts away from the origin of the coordinate system, the performance of the original GWO deteriorates sharply. The EGWO addresses this problem, and its performance on shifted benchmark functions demonstrates its effectiveness.
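A shifted benchmark of this kind can be built by translating a standard test function so its optimum no longer sits at the origin. A minimal sketch (the function name and shift vector are illustrative, not the paper's benchmark suite):

```python
def shifted_sphere(x, shift=(30.0, -25.0, 12.0)):
    # Sphere function translated so the global minimum sits at `shift`
    # instead of the origin; f(shift) = 0. The shift vector is arbitrary.
    return sum((xi - si) ** 2 for xi, si in zip(x, shift))
```

An optimizer biased toward the origin scores well on the unshifted sphere but poorly on this translated version, which is what exposes the weakness of the original GWO.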