Mastering Java for Data Science

By Alexey Grigorev

Overview of this book

Java is the most popular programming language according to the TIOBE index, and it is a typical choice for running production systems in many companies, from startups to large enterprises. Not surprisingly, it is also a common choice for creating data science applications: it is fast and has a great set of data processing tools, both built-in and external. What is more, choosing Java for data science allows you to easily integrate solutions with existing software and bring data science into production with less effort. This book will teach you how to create data science applications with Java. First, we will review the most important considerations when starting a data science application, and then brush up on the basics of Java and machine learning before diving into more advanced topics. We start by going over the existing libraries for data processing and libraries with machine learning algorithms. After that, we cover topics such as classification and regression, dimensionality reduction and clustering, information retrieval and natural language processing, and deep learning and big data. Finally, we finish the book by talking about ways to deploy the model and evaluate it in production settings.

Gradient Boosting Machines and XGBoost


The Gradient Boosting Machine (GBM) is an ensemble learning algorithm. The main idea behind GBM is to take some base model and then fit this model, over and over, to the data, gradually improving the performance. It differs from Random Forest: GBM builds models sequentially, with each new model trying to correct the errors of the previous ones, while Random Forest builds multiple independent models and averages their predictions.

The main idea behind GBM is best illustrated with a Linear Regression example. To fit several linear regressions to the data, we can do the following (a Java sketch follows the list):

  1. Fit the base model to the original data.
  2. Take the difference between the target values and the predictions of the first model (these are the residuals of step 1) and use it for training the second model.
  3. Take the difference between the residuals of step 1 and the predictions of the second model (these are the residuals of step 2) and fit the third model.
  4. Continue until you have trained N models.
  5. For predicting, sum up the predictions of all the individual models.
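
To make these steps concrete, here is a minimal sketch in plain Java, assuming a one-feature simple linear regression as the base model. The class and method names (GbmSketch, LinearModel, fit, predict) are illustrative, not from any particular library:

import java.util.ArrayList;
import java.util.List;

public class GbmSketch {

    // Simple linear regression y = a * x + b, fitted by least squares.
    static class LinearModel {
        double a, b;

        void fit(double[] x, double[] y) {
            int n = x.length;
            double meanX = 0, meanY = 0;
            for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
            meanX /= n; meanY /= n;
            double num = 0, den = 0;
            for (int i = 0; i < n; i++) {
                num += (x[i] - meanX) * (y[i] - meanY);
                den += (x[i] - meanX) * (x[i] - meanX);
            }
            a = num / den;
            b = meanY - a * meanX;
        }

        double predict(double x) { return a * x + b; }
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4, 5, 6, 7, 8};
        double[] y = {1.2, 1.9, 3.2, 4.1, 4.8, 6.3, 6.9, 8.1};

        int nModels = 3;
        List<LinearModel> ensemble = new ArrayList<>();

        // Step 1: the first model is fit to the original target; every
        // later model is fit to the residuals of the ensemble so far.
        double[] residuals = y.clone();
        for (int m = 0; m < nModels; m++) {
            LinearModel model = new LinearModel();
            model.fit(x, residuals);
            ensemble.add(model);
            // Steps 2-3: subtract the new model's predictions to get
            // the residuals used for training the next model.
            for (int i = 0; i < x.length; i++) {
                residuals[i] -= model.predict(x[i]);
            }
        }

        // Step 5: the final prediction is the sum over all the models.
        double query = 4.5;
        double prediction = 0;
        for (LinearModel model : ensemble) {
            prediction += model.predict(query);
        }
        System.out.println("prediction at x=4.5: " + prediction);
    }
}

Note that with an unregularized linear base model, the least-squares residuals after the first fit are already orthogonal to the feature, so the later models contribute almost nothing; this is one reason practical GBM implementations typically use shallow decision trees as base learners instead.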