Machine Learning with LightGBM and Python

By: Andrich van Wyk
Overview of this book

Machine Learning with LightGBM and Python is a comprehensive guide to learning the basics of machine learning and progressing to building scalable machine learning systems that are ready for release. This book will get you acquainted with the high-performance gradient-boosting LightGBM framework and show you how it can be used to solve various machine learning problems to produce highly accurate, robust, and predictive solutions. Starting with simple machine learning models in scikit-learn, you’ll explore the intricacies of gradient boosting machines and LightGBM. You’ll be guided through various case studies to better understand the data science process and learn how to practically apply your skills to real-world problems. As you progress, you’ll elevate your software engineering skills by learning how to build and integrate scalable machine learning pipelines to process data, train models, and deploy them to serve secure APIs using Python tools such as FastAPI. By the end of this book, you’ll be well equipped to use a range of state-of-the-art tools that will help you build production-ready systems, including FLAML for AutoML, PostgresML for operating ML pipelines using Postgres, high-performance distributed training and serving via Dask, and creating and running models in the cloud with AWS SageMaker.
Table of Contents (17 chapters)
Part 1: Gradient Boosting and LightGBM Fundamentals
Part 2: Practical Machine Learning with LightGBM
Part 3: Production-ready Machine Learning with LightGBM

Advanced boosting algorithm – DART

DART is an extension of the standard GBDT algorithm discussed in the previous section [4]. DART employs dropouts, a technique from deep learning (DL), to avoid overfitting by the decision tree ensemble. The extension is straightforward and consists of two parts. First, when fitting the next tree, $M_{n+1}(x)$, instead of using the scaled sum of all previous trees, $M_n, \ldots, M_1$, only a random subset of the previous trees is used, with the other trees dropped from the sum. The $p_{drop}$ parameter controls the probability of each previous tree being dropped. The second part of the DART algorithm applies additional scaling to the contribution of the new tree. Let $k$ be the number of trees dropped when the new tree, $M_{n+1}$, was fitted. Since $M_{n+1}$ was fitted without the contribution of those $k$ trees, adding it at full strength when updating the overall prediction, $F_{n+1}$, which includes all trees, causes the prediction to overshoot. Therefore...
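To make this concrete, the following is a minimal sketch of training a LightGBM model with the DART booster through the scikit-learn-style API. The synthetic dataset and the specific parameter values (`drop_rate`, `skip_drop`, `n_estimators`, and so on) are illustrative assumptions, not recommendations; `drop_rate` corresponds to the $p_{drop}$ parameter described above.

```python
# Minimal sketch: a LightGBM regressor using the DART boosting algorithm.
# The synthetic dataset and parameter values are illustrative assumptions.
import lightgbm as lgb
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Generate a small synthetic regression problem
X, y = make_regression(n_samples=2000, n_features=20, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = lgb.LGBMRegressor(
    boosting_type="dart",  # use DART instead of standard GBDT
    drop_rate=0.1,         # p_drop: probability of dropping each previous tree
    skip_drop=0.5,         # probability of skipping the dropout step entirely
    n_estimators=200,
    learning_rate=0.05,
    random_state=42,
)
model.fit(X_train, y_train)

print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.4f}")
```

Apart from `boosting_type` and the dropout-specific parameters, the model is trained exactly like a standard GBDT, so DART can typically be swapped in with minimal code changes.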