Machine Learning with LightGBM and Python

By: Andrich van Wyk
Overview of this book

Machine Learning with LightGBM and Python is a comprehensive guide to learning the basics of machine learning and progressing to building scalable machine learning systems that are ready for release. This book will get you acquainted with the high-performance gradient-boosting LightGBM framework and show you how it can be used to solve various machine learning problems to produce highly accurate, robust, and predictive solutions. Starting with simple machine learning models in scikit-learn, you’ll explore the intricacies of gradient boosting machines and LightGBM. You’ll be guided through various case studies to better understand the data science process and learn how to practically apply your skills to real-world problems. As you progress, you’ll elevate your software engineering skills by learning how to build and integrate scalable machine learning pipelines to process data, train models, and deploy them to serve secure APIs using Python tools such as FastAPI. By the end of this book, you’ll be well equipped to use various state-of-the-art tools that will help you build production-ready systems, including FLAML for AutoML, PostgresML for operating ML pipelines using Postgres, Dask for high-performance distributed training and serving, and AWS SageMaker for creating and running models in the cloud.
Table of Contents (17 chapters)

Part 1: Gradient Boosting and LightGBM Fundamentals
Part 2: Practical Machine Learning with LightGBM
Part 3: Production-ready Machine Learning with LightGBM

An overview of XGBoost

XGBoost, short for eXtreme Gradient Boosting, is a popular open source gradient boosting library with similar goals and functionality to LightGBM. XGBoost predates LightGBM; it was developed by Tianqi Chen and initially released in 2014 [1].

At its core, XGBoost implements GBDTs and builds them highly efficiently. Some of the main features of XGBoost are as follows (see the code sketch after this list):

  • Regularization: XGBoost incorporates both L1 and L2 regularization to avoid overfitting
  • Sparsity awareness: XGBoost efficiently handles sparse data and missing values, automatically learning the best default split direction for missing values during training
  • Parallelization: The library employs parallel and distributed computing techniques to speed up tree construction, significantly reducing training time
  • Early stopping: XGBoost can halt training when the model’s performance on a validation set stops improving for a set number of rounds, saving computation and helping to prevent overfitting
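
To make these features concrete, here is a minimal sketch using XGBoost’s scikit-learn API. The synthetic dataset, hyperparameter values, and missing-value rate are illustrative only; also note that in older XGBoost releases, early_stopping_rounds is passed to fit() rather than the constructor:

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Illustrative synthetic binary classification dataset
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Sparsity awareness: XGBoost handles NaNs natively by learning a default
# split direction for missing values at each tree node
X[np.random.default_rng(42).random(X.shape) < 0.05] = np.nan

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = xgb.XGBClassifier(
    n_estimators=500,
    reg_alpha=0.1,             # L1 regularization
    reg_lambda=1.0,            # L2 regularization
    n_jobs=-1,                 # parallelize tree construction across all cores
    early_stopping_rounds=10,  # halt if validation loss stalls for 10 rounds
    eval_metric="logloss",
)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)
print(f"Stopped at iteration: {model.best_iteration}")
```

In practice, early stopping usually ends training well before the n_estimators cap is reached, while the reg_alpha and reg_lambda penalties constrain the leaf weights to reduce overfitting.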