Introducing BO
BO is categorized as an informed search hyperparameter tuning method, meaning the search learns from previous iterations to (hopefully) focus on a more promising subspace in subsequent iterations. It also belongs to the sequential model-based optimization (SMBO) group. All SMBO methods work by sequentially updating a probability model that estimates the performance of a set of hyperparameters based on historically observed data, and by suggesting new hyperparameters to be tested in the following trials.
BO is a popular hyperparameter tuning method thanks to its data efficiency, meaning it needs relatively few samples to reach the optimal solution. You may be wondering: how exactly does BO achieve this data efficiency? This property exists thanks to BO’s ability to learn from previous iterations. BO can learn and predict which subspace is worth visiting in the future by utilizing a probabilistic regression...
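To make the SMBO loop concrete, here is a minimal sketch of one BO iteration cycle in plain NumPy. It is not the implementation from any particular library: the one-dimensional objective, the Gaussian process surrogate with a fixed RBF kernel, and the upper-confidence-bound (UCB) acquisition rule are all illustrative assumptions chosen for brevity. The key idea it demonstrates is the one described above: a probabilistic regression model is refit on all historical observations at every step, and its predictions (mean plus uncertainty) decide which point to try next.

```python
import numpy as np

def objective(x):
    # Hypothetical expensive black-box function, standing in for a
    # model's validation score as a function of one hyperparameter.
    return -(x - 2.0) ** 2

def rbf_kernel(a, b, length_scale=0.5):
    # Squared-exponential (RBF) kernel between two sets of 1-D points.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_cand, noise=1e-6):
    # Gaussian process posterior mean and std at candidate points,
    # conditioned on all historically observed (x, y) pairs.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf_kernel(x_obs, x_cand)
    K_ss = rbf_kernel(x_cand, x_cand)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_obs
    var = np.diag(K_ss - K_s.T @ K_inv @ K_s)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Two initial observations, then the sequential BO loop.
x_obs = np.array([0.5, 3.5])
y_obs = objective(x_obs)
candidates = np.linspace(0.0, 4.0, 200)

for _ in range(10):
    # Refit the surrogate on all history, score candidates with UCB
    # (posterior mean + 2 * posterior std), and evaluate the winner.
    mu, sigma = gp_posterior(x_obs, y_obs, candidates)
    ucb = mu + 2.0 * sigma
    x_next = candidates[np.argmax(ucb)]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = x_obs[np.argmax(y_obs)]  # should land near the true optimum x = 2
```

The UCB rule makes the exploration/exploitation trade-off explicit: points with a high predicted mean (exploitation) or high predictive uncertainty (exploration) are favored, which is how BO decides which subspace is worth visiting next. Production libraries replace this sketch with more sophisticated surrogates and acquisition functions, but the loop structure is the same.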