This talk was given at Metis (San Francisco) on October 11, 2017. The livestream can be viewed here: https://livestream.com/metis/events/7706066
This material builds on what you already know about parameter tuning to dive deeper into the world of stochastic optimization. In particular, we’ll explore a range of methods, including stochastic gradient descent, simulated annealing, and particle swarm optimization, and cultivate intuition about how these approaches work, when they can be applied, what their underlying topologies look like, and how you can get the best performance out of them.
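To make one of those methods concrete, here is a minimal sketch of simulated annealing in Python. It is an illustration of the general technique, not code from the talk: the function names, the geometric cooling schedule, and the toy objective are all assumptions chosen for brevity. The key idea it demonstrates is accepting occasional uphill moves with a probability that shrinks as the "temperature" cools, which lets the search escape local minima early on and settle down later.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.95, iters=500):
    """Minimize `objective` starting from x0 (hypothetical helper for illustration)."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        # Propose a random neighbor of the current point.
        candidate = x + random.uniform(-step, step)
        fc = objective(candidate)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule (an assumption; many schedules exist)
    return best, fbest

# Example: minimize a bumpy 1-D function with many local minima.
f = lambda x: x ** 2 + 3 * math.sin(5 * x)
x_best, f_best = simulated_annealing(f, x0=random.uniform(-4, 4))
print(f"x* = {x_best:.3f}, f(x*) = {f_best:.3f}")
```

The same loop works for hyperparameter tuning if you swap the toy objective for a cross-validation score, which is the setting the talk is concerned with.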