Auto-optimizing a neural network (and its architecture) on the CIFAR-100 dataset. Could easily be transferred to another dataset or classification task.
Hi,
When you analyze the results in the notebook, you plot accuracy = [1.0/100] + neural_net["history"]["fine_outputs_acc"] and call it the test accuracy; however, it seems to me that this is the training accuracy.
I think you should plot accuracy = [1.0/100] + neural_net["history"]["val_fine_outputs_acc"] to obtain the test accuracy.
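The distinction could be sketched as follows — this is a minimal mock-up, assuming the history dict follows Keras' convention where a "val_" prefix marks validation metrics (the dict contents here are illustrative, not the repo's actual results):

```python
# Hypothetical history dict mimicking the notebook's structure
# (Keras convention: "val_"-prefixed keys hold validation metrics).
neural_net = {
    "history": {
        "fine_outputs_acc": [0.20, 0.45, 0.60],      # training accuracy per epoch
        "val_fine_outputs_acc": [0.18, 0.35, 0.42],  # validation accuracy per epoch
    }
}

# Prepend chance-level accuracy (1/100 classes) as an "epoch 0" point,
# as the notebook does.
train_accuracy = [1.0 / 100] + neural_net["history"]["fine_outputs_acc"]
val_accuracy = [1.0 / 100] + neural_net["history"]["val_fine_outputs_acc"]

print(train_accuracy)  # curve currently plotted (training)
print(val_accuracy)    # curve that reflects held-out performance
```

The training curve typically sits above the validation curve once the model starts overfitting, so plotting the wrong key can make results look better than they are.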
Did I miss something? Are you more interested in the training accuracy as a function of the hyperparameters than in the test (or validation) accuracy?
Thanks in advance
Edit: I would have liked to create the issue in the Vooban repo, but I can't.
Hi, really enjoying this repository. Your work is well done and has gotten me interested in some practical ways to automate hyperparameter optimization. I was wondering if you might add some information to the readme to give folks a sense of how long a parameter study this extensive takes, and what kind of hardware you used. What kind of GPUs, and were any parts parallelizable? Did it take on the order of hours, days, or weeks?