Revisiting the usual practices
Conducting hyperparameter tuning experiments in a small-scale project may seem straightforward. We can easily run several iterations of experiments and record all the results in a separate document. In each experiment iteration, we can log the details of the best set of hyperparameter values (or of each tested set, if we perform the manual search method shown in Chapter 3, Exhaustive Search), along with the evaluation metric. With such an experiment log, we can learn from the history and define a better hyperparameter space for the next iteration of the experiment.
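For instance, here is a minimal sketch of such an experiment log in Python. The file name experiment_log.csv, the helper log_experiment, and the particular hyperparameters and metric are illustrative assumptions, not part of any library:

```python
import csv
from pathlib import Path

LOG_PATH = Path("experiment_log.csv")  # hypothetical log file
FIELDS = ["iteration", "learning_rate", "max_depth", "f1_score"]

def log_experiment(iteration, hyperparams, metric):
    """Append one tuning iteration (hyperparameters + metric) to the log."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header row only once
        writer.writerow({"iteration": iteration, **hyperparams,
                         "f1_score": metric})

# One manual-search iteration: record the tested hyperparameter
# values and the resulting evaluation metric.
log_experiment(1, {"learning_rate": 0.1, "max_depth": 5}, 0.83)
```

A plain CSV like this is enough to review past iterations and narrow the hyperparameter space before the next one.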
When we adopt an automated hyperparameter tuning method (any of the methods we’ve discussed so far besides manual search), we get the final best set of hyperparameter values directly, as the sketch after this paragraph illustrates. However, this is not the case when we adopt the manual search method, where we need to test numerous sets of hyperparameters by hand. Several practices are adopted by the community...
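As a concrete illustration of the automated case, here is a minimal sketch using scikit-learn's GridSearchCV; the dataset, estimator, and grid are arbitrary assumptions chosen for demonstration. Once the search finishes, the best set of hyperparameter values is available directly, with no manual bookkeeping:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# An arbitrary, illustrative hyperparameter space.
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)

# The tuner returns the final best set of hyperparameter values directly.
print(search.best_params_, search.best_score_)
```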