
MOEADr package



Felipe Campelo
Department of Computer Science
Aston University
Birmingham, UK

Lucas Batista
Operations Research and Complex Systems Laboratory - ORCS Lab
Universidade Federal de Minas Gerais
Belo Horizonte, Brazil

Claus Aranha
Faculty of Engineering, Information and Systems
University of Tsukuba
Tsukuba, Japan


R package containing a component-based, modular implementation of the Multiobjective Evolutionary Algorithm with Decomposition (MOEA/D) framework.

The MOEA/D framework is seen as a combination of specific design decisions regarding several independent modules:

  • Decomposition strategy;
  • Aggregation function;
  • Objective scaling strategy;
  • Neighborhood assignment strategy;
  • Variation Stack;
  • Update strategy;
  • Constraint handling method;
  • Termination criteria.

This package provides several options for each module, as explained in the documentation of its main function, MOEADr::moead(). The input structure of this function is also explained in its documentation. More details on the component-based approach behind the MOEADr package are available in our paper, The MOEADr Package - A Component-Based Framework for Multiobjective Evolutionary Algorithms Based on Decomposition, available on arXiv: https://arxiv.org/abs/1807.06731.

To install the current release version in your system, simply use:

install.packages("MOEADr")

For the most up-to-date development version, install the GitHub version using:

# install.packages("devtools")
devtools::install_github("fcampelo/MOEADr")

Example

As a simple example, we can reproduce the original MOEA/D (Zhang and Li, 2007) and run it on a 30-variable ZDT1 function:

 ## 1: prepare test problem
 library(smoof)
 ZDT1 <- make_vectorized_smoof(prob.name  = "ZDT1",
                               dimensions = 30)

 ## 2: set input parameters
 problem   <- list(name       = "ZDT1",
                   xmin       = rep(0, 30),
                   xmax       = rep(1, 30),
                   m          = 2)
 decomp    <- list(name       = "SLD", H = 99)
 neighbors <- list(name       = "lambda",
                   T          = 20,
                   delta.p    = 1)
 aggfun    <- list(name       = "wt")
 variation <- list(list(name  = "sbx",
                        etax  = 20, pc = 1),
                   list(name  = "polymut",
                        etam  = 20, pm = 0.1),
                   list(name  = "truncate"))
 update    <- list(name       = "standard", 
                   UseArchive = FALSE)
 scaling   <- list(name       = "none")
 constraint<- list(name       = "none")
 stopcrit  <- list(list(name  = "maxiter",
                     maxiter  = 200))
 showpars  <- list(show.iters = "dots",
                   showevery  = 10)
 seed      <- NULL

 ## 3: run MOEA/D
 out1 <- moead(problem = problem, 
               decomp = decomp, aggfun = aggfun, neighbors = neighbors, variation = variation, 
               update = update, constraint = constraint, scaling = scaling, stopcrit = stopcrit,
               showpars = showpars, seed = seed)

 ## 3.1: For your convenience, you can also use the preset_moead() function to reproduce the above setup, 
 ##      and only modify the desired parts:
 
 out2 <- moead(problem = problem,
               preset = preset_moead("original"), 
               stopcrit = list(list(name = "maxiter", maxiter = 1000)),
               showpars = showpars, seed = 42)

 ## 4: plot output
 plot(out1$Y[,1], out1$Y[,2], type = "p", pch = 20)
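
The returned object can also be inspected with the summary method the package provides for moead objects (defined in summary.moead.R):

 ## 5: summarize results
 summary(out1)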

Have fun!
Felipe


MOEADr's Issues

Fix decomposition_uniform [PRIORITY]

Integration tests detected a problem with decomposition_uniform() that must be fixed.

Recall that all decomposition methods must return an [N x m] weight matrix W with all rows summing to unity.
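
As a quick sanity check of this invariant (a sketch; W stands for the matrix returned by any decomposition routine, and the tolerance is an assumption):

 is.matrix(W) && ncol(W) == decomp$.nobj && all(abs(rowSums(W) - 1) < 1e-8)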

Assume that decomp is defined as:

decomp <- list(name = "uniform", N = 10, .nobj = 2)

Running decomposition_uniform(decomp) yields:

> decomposition_uniform(decomp)
     Var1   Var2     Var3       Var4         Var5         Var6         Var7         Var8
[1,] 0.95 0.0425 0.005625 0.00121875 0.0003609375 0.0001328906 5.684766e-05 2.639355e-05
            Var9        Var10      VarLast
[1,] 1.18771e-05 3.365178e-06 6.393839e-05

That is, a single row vector is returned instead of the expected [10 x 2] matrix.

This problem appears not to happen for .nobj > 2. For instance, if we have 3 objectives:

> decomp$.nobj <- 3
> decomposition_uniform(decomp)
            Var1       Var2    VarLast
 [1,] 0.77639320 0.16770510 0.05590170
 [2,] 0.61270167 0.17428425 0.21301408
 [3,] 0.50000000 0.07500000 0.42500000
 [4,] 0.40839202 0.50286678 0.08874120
 [5,] 0.32917961 0.36895122 0.30186918
 [6,] 0.25838015 0.18540496 0.55621489
 [7,] 0.19377423 0.76591449 0.04031129
 [8,] 0.13397460 0.56291651 0.30310889
 [9,] 0.07804555 0.32268406 0.59927039
[10,] 0.02532057 0.04873397 0.92594546
> rowSums(decomposition_uniform(decomp))
 [1] 1 1 1 1 1 1 1 1 1 1

@caranha , since you implemented this routine, could you please look into it? I have temporarily removed decomposition_uniform() from the @exports list, until we can sort it out.

Implement Violation Handling

  • Add function to moead to calculate violation penalties defined by the user (after evaluation)
  • Add sample violation penalty functions (box constraints and unitary constraints)
  • Re-structure update_population to calculate ordering index based on bigZ and violation values
  • Implement violation handling functions
    • penalty (see the sketch after this list)
    • tournament selection
    • stochastic
    • strict
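
A minimal sketch of the penalty idea, assuming bigZ holds the scalarized values and bigV the matching aggregated violation values (bigV and beta are hypothetical names):

 ## penalized ordering values: infeasible solutions are ranked down as beta grows
 bigZ.pen <- bigZ + beta * bigV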

Verify calculation of the Tchebycheff norm

The Tchebycheff norm is defined as the maximum absolute value among the components of a vector:

http://mathworld.wolfram.com/L-Infinity-Norm.html
http://webber.physik.uni-freiburg.de/~hon/vorlss02/Literatur/StatistikGlossar/node29.html
https://en.wikipedia.org/wiki/Uniform_norm

Check whether the internal calculations are using absolute values or signed values. This shouldn't be a problem (as the arguments of the norm should always be non-negative for MOEA/D), but it is important to keep these things straight.
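
In R, this is simply (for a numeric vector v):

 max(abs(v))   # L-infinity (Tchebycheff) norm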

WISHLIST

  • VARIATION:
    • Implement more variation operators (incl. local search)
  • NEIGHBORHOOD DEFINITION:
    • Un-hardcode it (modularize for easier upgrading)
  • POPULATION GENERATION:
    • Un-hardcode it (modularize for easier upgrading)
  • SCALARIZATION:
    • Verify whether iPBI is correct (check with Hiroyuki Sato -- University of Electro-Communications, Tokyo)
  • OBJECTIVE SCALING:
    • Implement more scaling alternatives (e.g., Deb's)
  • OUTPUT:
    • Logging hooks for user-defined variation operators
    • Include output options (e.g., log file, all visited points, etc.)

Package "smoof" should not be necessary for build?

Package smoof has a ton of dependencies and, if I understand it correctly, is only used in the examples/case studies.

So I think it would be good for future users if this package were suggested (listed under Suggests) rather than required for building/installing MOEADr.

Found bug

A bug occurs when calling the summary function from summary.moead.R with the reference points given as parameters. The variable "Y" is not defined on its own in any scope; it is inside the "object" variable.

In that file, the assertthat::assert_that() call should be updated to:

 assertthat::assert_that(
   assertthat::is.count(ndigits),
   nullRP || is.numeric(ref.point),
   nullRP || length(ref.point) == ncol(object$Y),
   nullRF || is.numeric(ref.front),
   nullRF || ncol(ref.front) == ncol(object$Y))

Manual initialisation + save/resume

Suggested by Jochen Papenbrock (via email):

Hi Felipe,
is it possible to generate an initial population for MOEADr?
Many optimisers also allow some iterations and a later restart of iterations with the rounds computed so far.
Also, it would be helpful to parallelise with many CPU, e.g. the fitness evaluation.
Best,
Jochen

TODO: Update

  • Implement other update strategies:
    • updt restricted
    • updt best
  • Choose and implement some constraint handling methods
    • method 1
    • method 2

Suggestions for more stop criteria

Some easy-to-implement stopping criteria, to be added after the paper is submitted:

  • diversity-based stop criterion (number of identical solutions, or distance between the new and old populations; see the sketch below)
  • utility-based stop criterion (utility threshold for stopping the run, or delta utility between the previous and the current iteration)
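
A minimal sketch of the diversity-based idea, assuming X.old and X.new are the decision-variable matrices of two consecutive iterations (all names and the tolerance are assumptions):

 ## stop when the population barely moves between iterations
 stop.now <- max(abs(X.new - X.old)) < 1e-6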

Evaluation stop criterion is not exact

The stop criteria are tested once per iteration. If the number-of-evaluations criterion is selected, we may perform more than the requested number of evaluations.

For example, suppose the stop criterion is 1000 function evaluations and we have 300 subproblems. The program will stop at iteration 4, after 1200 evaluations have been performed.
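
The overshoot is easy to quantify, since evaluations advance in steps of the population size:

 N       <- 300                    # subproblems (one evaluation each per iteration)
 maxeval <- 1000                   # requested budget
 iters   <- ceiling(maxeval / N)   # the check happens once per iteration -> 4
 iters * N                         # 1200 evaluations actually performed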

For a very large total number of function evaluations, this may not matter much. But for more costly problems, it may become an issue.

Unfortunately, I'm not sure this is treatable with how we have structured the code and components: the two ways I know of to deal with this are to truncate the last population (not possible if # of population == # of subproblems), or to bail out of the program, which may require us to set up some sort of observer in R.

Not urgent, but we will definitely want to tackle this problem in future versions of the package.

Case Study Two

Use the irace package to find the optimal combination of components for a given problem.
Report on the results.

This one is basically the case study of the ITEC paper. Most of the code is already done. It just needs to be tested against the recent changes, and the vignette written.

  • Test that the code still works
  • Separate the case study into two parts: irace configuration (long running time) and examination of the results (short running time; results known in advance, or from the first part)
  • Write Vignette (same text as in the JSS manuscript).

Implement Variation Strategies

  • Check Correctness of SBX
  • Check Correctness of Polynomial Mutation
  • Implement Binomial Recombination
  • Implement Differential Mutation (see the sketch after this list)
    • Implement fixed Phi variation
    • Implement random Phi variation
    • Implement random basis variation
    • Implement mean basis variation
    • Implement weighted mean basis variation
  • Tests for the variation strategies
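
A minimal sketch of differential mutation with fixed phi, assuming X is the [N x nv] population matrix (the helper name is hypothetical):

 diffmut_fixed <- function(X, phi = 0.5) {
   N <- nrow(X)
   r <- t(replicate(N, sample.int(N, 3)))   # three distinct row indices per candidate
   ## classic DE difference: base vector plus scaled difference of two others
   X[r[, 1], ] + phi * (X[r[, 2], ] - X[r[, 3], ])
 }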

Add tests

  • Condition checks for variation operators (is every value between 0 and 1? Is the output a matrix of the right size? Is the behavior correct depending on the parameters?) -- see the sketch after this list
  • Value checks for variation operators (use fixed seeds and some fixed examples)
  • Condition checks for repair operators (is every value between 0 and 1? Is the output a matrix of the right size?)
  • Value checks for repair operators (to verify they don't change parameters where not necessary)
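
A sketch of what a condition check could look like in testthat style (the operator name variation_sbx and its call signature here are assumptions, not the package's confirmed interface):

 library(testthat)
 X <- matrix(runif(200), ncol = 10)                  # example population in [0, 1]
 test_that("variation output is a valid population", {
   Xnew <- variation_sbx(X = X, etax = 20, pc = 1)   # hypothetical call
   expect_true(is.matrix(Xnew))
   expect_equal(dim(Xnew), dim(X))
   expect_true(all(Xnew >= 0 & Xnew <= 1))
 })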

Case Study Three

Create a custom Variation/Decomposition function, and execute MOEA/D using this custom function.

  • test and confirm that it is possible to use a Variation/Decomposition component from the environment or from the filesystem without recompiling the package.
  • if the test above fails, modify the code so that it succeeds.
  • write a simple optimization exercise (similar to case 1), using the custom variation/decomposition
    component.
  • write vignette (same text as JSS paper)

TODO: Variation

  • Turn the Repair operators into Variation operators
  • Modify the perform-variation routine so that it no longer calls Repair separately
  • Local search operator: Choose and implement some local search operators

Implement Scalarization Strategies

Implement the missing scalarization strategies

  • ws is just multiplying the matrices, without subtracting the min
  • wt is actually ws, without the Tchebycheff norm
  • awt is missing
  • double-check the pbi calculations (they seem to be missing a norm division in D2)

Also, all scalarization functions seem to be missing the min-subtraction step that is included in the paper (see the sketch below).
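
For reference, a sketch of the intended definitions for a single weight vector w, objective vector f, and estimated ideal point zstar (these are the standard MOEA/D formulations, not the package's current code):

 ws <- sum(w * (f - zstar))      # weighted sum, after subtracting the ideal point
 wt <- max(w * abs(f - zstar))   # weighted Tchebycheff, using the L-infinity norm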

Check Uniform Design

by Lucas Batista:

Felipe, good afternoon!

I had to implement the Uniform Design this Saturday and, after wrestling with a certain problem, everything worked out. I think Claus must have run into the same problem, but perhaps without noticing it. The fact is that all of the references we used present one of the calculations in inverted order, which compromises the correct identification of the best set of reference vectors.

In general, one must define U such that U = (U - 0.5)/N before computing the CD2 discrepancy. However, from a quick look at the code on CRAN, it seems to me that this is only done after CD2 is computed.

Let's talk during the week.

Best,
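
A sketch of the intended order of operations, assuming U is the [N x m] matrix of candidate design levels and cd2() is the internal routine that computes the CD2 discrepancy (visible in the profiling output of the next issue):

 U    <- (U - 0.5) / N   # scale the levels BEFORE evaluating the discrepancy
 disc <- cd2(U)          # only then compute CD2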

#wishlist speed up Uniform Decomposition

There is only so much speed-up that can be achieved, since the method is O(N^3 * nobj). But I believe we can still make it a bit faster. Here is the output from Rprof:

> Rprof(tmp<-tempfile())
> x <- decomposition_uniform(list(N=100,.nobj=4))
> Rprof()
> summaryRprof(tmp)$by.total
                               total.time total.pct self.time self.pct
"decomposition_uniform"            163.56     99.99      0.00     0.00
"apply"                            163.54     99.98    105.98    64.79
"which.min"                        163.54     99.98      0.00     0.00
"FUN"                              163.50     99.95     26.50    16.20
"cd2"                              163.44     99.91      0.04     0.02
"sapply"                           161.42     98.68      0.06     0.04
"lapply"                           160.18     97.92      1.42     0.87
"aperm"                              6.80      4.16      3.80     2.32
"matrix"                             5.64      3.45      4.38     2.68
"unlist"                             3.94      2.41      3.94     2.41
"aperm.default"              
(... skipped the rest ...)

Verify iPBI

Check whether the weird performance observed for the inverted PBI is a side effect of the estimation of the nadir point. While it makes sense to estimate the ideal point from the current population, the estimation of the nadir point may require a modification, so that it contains the "historic" worst function values instead of the "current" worst values (see the sketch after this list).

  • Check with Lucas if that makes sense.
  • Verify experimentally
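
A minimal sketch of the "historic" nadir update, assuming Y is the [N x m] matrix of current objective values and nadir carries the running estimate across iterations (names are assumptions):

 nadir <- pmax(nadir, apply(Y, 2, max))   # keep the worst value seen so far per objective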

Add "devel" branch

I think we should add a "devel" branch. The idea would be for one of us to commit changes to "devel" first, then move them to "master" after the commit is checked by another developer (currently it's just the two of us, but hopefully others will join in the future).

Whatcha think?

Different Initialization Methods

As the package is currently implemented, the only initialization method (for continuous problems) is hard-coded. Thus, there is no way of incorporating new initialization methods, for example for discrete problems.

Suggestion: implement initialization as a component (see the sketch below).
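
A sketch of what such a component could look like, mirroring the decomposition_* naming convention (the function name and signature are hypothetical):

 initialization_random <- function(N, problem, ...) {
   nv <- length(problem$xmin)
   X  <- matrix(runif(N * nv), nrow = N)   # points in [0, 1]^nv
   ## scale and shift to the [xmin, xmax] box of the problem definition
   sweep(sweep(X, 2, problem$xmax - problem$xmin, "*"), 2, problem$xmin, "+")
 }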

decomposition_sld generates array with useless row numbers

Not sure whether this is a problem or not, but the row names of the weight matrix returned by decomposition_sld are meaningless (a side effect of rbind, probably).

You can fix this with rownames(X) <- NULL if necessary.

Example

> p <- list(H = 4, .nobj = 3)
> decomposition_sld(p)
   Var1 Var2 VarLast
1  0.00 0.00    1.00
2  0.25 0.00    0.75
3  0.50 0.00    0.50
4  0.75 0.00    0.25
5  1.00 0.00    0.00
6  0.00 0.25    0.75
7  0.25 0.25    0.50
8  0.50 0.25    0.25
9  0.75 0.25    0.00
11 0.00 0.50    0.50
12 0.25 0.50    0.25
13 0.50 0.50    0.00
16 0.00 0.75    0.25
17 0.25 0.75    0.00
21 0.00 1.00    0.00

TODO: Documentation

  • Write Vignette explaining how to define a MOEADr-compliant problem
  • Write vignette explaining the variation stack
  • Finish documentation of function moead()
  • Thorough review of documentation
  • Update vignette "Writing functions for MOEADr" to reflect recent changes (local search, constraint handling, changes in variable passing etc.)

Case Study One

Execute a simple multi-objective optimization using a default MOEA/D configuration.
Report the significant results.

Requirements

  • Function to report results
  • Trivial example of problem function (do not use smoof so it is easier for the reader to understand)
  • Change output object to respond to R standard inspection functions (summary, plot)

Size of Decomposition Matrix on "writing-functions for moeadr"

So, looking at the documentation at "writing functions", I noticed that the return value for Decomposition Strategies states that:

the function must output a _N x m_ matrix of weights, with _N_ the population size and _m_ the number of objectives.

This is not true for SLD, MSLD, or Uniform, either in the paper or in the code. For all three, "N" is the number of subproblems calculated by the decomposition method from the user parameters, which (I guess) should be <= the population size, but is not necessarily equal to it (see the arithmetic sketch below).

Is the documentation wrong, or the code wrong?
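
For reference, SLD with parameter H and m objectives generates choose(H + m - 1, m - 1) weight vectors, so N follows from the inputs rather than being a free choice:

 H <- 4; m <- 3
 choose(H + m - 1, m - 1)   # 15 subproblems, matching the decomposition_sld example above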
