- Change the term "time" in "training time" and "construction time" to "speed".
- Rename XORUtil.py to XORTrainUtil.py.
- Allow printing of results in the target trainer functions.
- Rename XORTrainerFunc to XORTargetTrainerFunctions.
- Allow a constraint argument set for the functions in XORTargetTrainerFunctions.
- Allow the baysian_*_logs.json files to be organized into a folder (see the sketch below).
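A minimal sketch of one way to implement the last item, assuming the logs are written with `json.dump`; the folder name and the `save_baysian_log` helper are hypothetical, not taken from the repo.

```python
import json
import os

# Hypothetical helper: write a baysian_*_logs.json file into a dedicated
# folder instead of the working directory. The folder name "baysian_logs"
# is an assumption.
def save_baysian_log(log, run_name, log_dir="baysian_logs"):
    os.makedirs(log_dir, exist_ok=True)  # create the folder if it is missing
    path = os.path.join(log_dir, f"baysian_{run_name}_logs.json")
    with open(path, "w") as f:
        json.dump(log, f, indent=2)
    return path
```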
jeffrey82221 / hyperparametertuningexperiment
The goal of this experiment is to compare the effectiveness and efficiency of different hyper-parameter tuning schemes, such as grid search, random search, Bayesian optimization, and population-based hyper-parameter tuning. We leverage a package called Ray, which supports hyper-parameter tuning with parallel computation, to speed up the experiment. We focus on a simple XOR problem, a function that can be learned by a neural network with only a few layers (two are sufficient). The basic hyper-parameters considered in this experiment are:

1. Batch size
2. Learning rate
3. Momentum
4. Layer count
5. Layer size
Home Page: https://www.evernote.com/l/AFmfWhOhhENMcoTeYrTgz8v9E7Tk2SvKkB4
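As a rough illustration, the five hyper-parameters above can be expressed as a Ray Tune search space. The sketch below uses random search with the legacy `tune.report` API; the value ranges and the `train_xor` trainable are illustrative assumptions, not the repo's actual code.

```python
from ray import tune

# Hypothetical search space for the five hyper-parameters listed above;
# the ranges are illustrative assumptions, not the repo's settings.
search_space = {
    "batch_size": tune.choice([4, 8, 16, 32]),
    "lr": tune.loguniform(1e-4, 1e-1),
    "momentum": tune.uniform(0.0, 0.99),
    "layer_count": tune.randint(1, 4),   # upper bound exclusive: 1-3 hidden layers
    "layer_size": tune.choice([2, 4, 8, 16]),
}

def train_xor(config):
    # Placeholder trainable: a real version would build and fit a small
    # network on the XOR dataset using the values in `config`.
    loss = (config["lr"] - 0.01) ** 2    # dummy objective
    tune.report(loss=loss)

# Random search: sample 20 configurations from the space; Ray runs trials in parallel.
analysis = tune.run(train_xor, config=search_space, num_samples=20)
print(analysis.get_best_config(metric="loss", mode="min"))
```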