
# Discovering Leave-One-Out cross-validation

Essentially, Leave-One-Out (LOO) cross-validation is just k-fold cross-validation where k = n, with n being the number of samples. This means that in each fold there are n-1 samples in the training set and exactly 1 sample in the validation set (see Figure 1.3). Unsurprisingly, this is a very computationally expensive strategy, since the model must be trained n times, and it produces a high-variance estimate of the evaluation score:

Figure 1.3 – LOO cross-validation
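To make the k = n relationship concrete, here is a quick sketch with toy data (the array contents are illustrative only), showing that `LeaveOneOut` produces one fold per sample, each with n-1 training samples and a single validation sample:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

X = np.arange(12).reshape(6, 2)  # 6 samples, 2 features

loo = LeaveOneOut()
# One fold per sample, so k = n = 6 splits
n_splits = loo.get_n_splits(X)
print(n_splits)  # 6

for train_index, val_index in loo.split(X):
    # n - 1 samples go to training, exactly 1 to validation
    assert len(train_index) == len(X) - 1
    assert len(val_index) == 1
```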

So, when is LOO preferred over k-fold cross-validation? LOO works best when you have a very small dataset. It is also a good choice when you care more about getting a high-confidence estimate of the model's performance than about keeping the computational cost down.

Implementing this strategy from scratch is actually very simple: we just need to loop through the indices of the data and do some data manipulation. However, the Scikit-Learn package also provides an implementation of LOO, which we can use:

```python
from sklearn.model_selection import train_test_split, LeaveOneOut

# df is a pandas DataFrame holding the full dataset
df_cv, df_test = train_test_split(df, test_size=0.2, random_state=0)
loo = LeaveOneOut()
for train_index, val_index in loo.split(df_cv):
    df_train, df_val = df_cv.iloc[train_index], df_cv.iloc[val_index]
    # perform training or hyperparameter tuning here
```
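As mentioned earlier, LOO can also be implemented from scratch with a simple loop over the data indices. The following is a minimal sketch using a toy NumPy array; the variable names are illustrative, not from the library:

```python
import numpy as np

# Toy stand-in for the cross-validation data
data = np.arange(5)

# LOO from scratch: each sample takes one turn as the validation set
for val_idx in range(len(data)):
    train_idx = [i for i in range(len(data)) if i != val_idx]
    train, val = data[train_idx], data[val_idx:val_idx + 1]
    # train holds n - 1 samples, val exactly 1
    assert len(train) == len(data) - 1 and len(val) == 1
```

This reproduces the same splits as `LeaveOneOut`, which is why the library class needs no configuration.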

Notice that no arguments are passed to `LeaveOneOut` since this strategy is very straightforward and involves no stochastic procedure. There is also no stratified version of LOO, since the validation set always contains exactly one sample.

Now that you are familiar with the concept of LOO, in the next section we will learn about a slight variation of it.