Chapter 1: Evaluating Machine Learning Models
Machine Learning (ML) models need to be evaluated thoroughly to ensure they will work in production. We have to verify that a model is not merely memorizing the training data (overfitting), and also that it learns enough from that data to generalize to unseen examples (that is, it does not underfit). Choosing an appropriate evaluation method is also critical when we want to perform hyperparameter tuning at a later stage.
In this chapter, we'll cover the essentials of evaluating ML models. First, we'll build an understanding of the concept of overfitting. Then, we'll look at the idea of splitting data into training, validation, and test sets. We'll also learn the difference between random and stratified splits and when to use each of them.
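As a quick preview, here is a minimal sketch of the difference between a random and a stratified split using Scikit-Learn's train_test_split function; the toy arrays X and y are assumptions made purely for illustration, and we'll treat splitting in depth later in the chapter:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data (assumption for illustration): 10 samples, 2 features,
# with an imbalanced binary label (seven 0s, three 1s)
X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])

# Random split: class proportions in the train/test sets may drift
# from the original 70/30 ratio
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Stratified split: passing the labels via stratify= preserves the
# class proportions in both the train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)
```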
We'll then discuss the concept of cross-validation and several of its variants: k-fold, repeated k-fold, Leave One Out (LOO), Leave P Out (LPO), and a strategy specific to time-series data called time-series cross-validation. We'll also learn how to implement each of these evaluation strategies using the Scikit-Learn package.
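All of these strategies share the same splitter interface in Scikit-Learn's model_selection module. As a minimal sketch (the toy data below is an assumption for illustration), here is k-fold cross-validation; the other splitters covered in this chapter (RepeatedKFold, LeaveOneOut, LeavePOut, and TimeSeriesSplit) can be dropped in the same way:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # toy feature matrix: 10 samples, 2 features
y = np.arange(10)                 # toy targets, one per sample

# Every Scikit-Learn splitter yields (train_indices, validation_indices) pairs
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    print(f"Fold {fold}: train={train_idx}, validation={val_idx}")
```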
By the end of this chapter, you will have a good understanding of why choosing a proper evaluation strategy is critical in the ML model development life cycle. You will also be familiar with a range of evaluation strategies, be able to choose the most appropriate one for your situation, and know how to implement each of them using the Scikit-Learn package.
In this chapter, we're going to cover the following main topics:
- Understanding the concept of overfitting
- Creating training, validation, and test sets
- Exploring random and stratified splits
- Discovering k-fold cross-validation
- Discovering repeated k-fold cross-validation
- Discovering LOO cross-validation
- Discovering LPO cross-validation
- Discovering time-series cross-validation