Basic Naive Bayes classifier baseline

As per the rules of the challenge, the participants had to outperform the basic Naive Bayes classifier in order to qualify for prizes. Naive Bayes assumes that the features are independent of one another (refer to Chapter 1, Applied Machine Learning Quick Start), as sketched below.
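
As a reminder of the Chapter 1 material (the notation here is only illustrative), the independence assumption means that, for a class y and features x_1, ..., x_n, the class-conditional probabilities are simply multiplied together:

    P(y | x_1, ..., x_n) ∝ P(y) × P(x_1 | y) × P(x_2 | y) × ... × P(x_n | y)

The classifier predicts the class y with the largest value of this product, so no interactions between features are modeled, which is what makes it a simple baseline.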

The KDD Cup organizers ran this vanilla Naive Bayes classifier without any feature selection or hyperparameter tuning. For the large dataset, the overall AUC scores of the Naive Bayes classifier on the test set were as follows (a minimal sketch of how such a baseline can be reproduced appears after the list):

  • Churn problem: AUC = 0.6468
  • Appetency problem: AUC = 0.6453
  • Upselling problem: AUC = 0.7211
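
The organizers' exact setup is not published here, but a minimal Weka sketch of such a baseline might look like the following. The file name orange_large_train.arff, the choice of 10-fold cross-validation, and the assumption that the class is the last attribute are all illustrative rather than taken from the challenge:

    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class NaiveBayesBaseline {
        public static void main(String[] args) throws Exception {
            // Hypothetical ARFF export of the KDD Cup large training set
            Instances data = new DataSource("orange_large_train.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1); // assume the class is the last attribute

            // Vanilla Naive Bayes: default settings, no feature selection, no tuning
            NaiveBayes nb = new NaiveBayes();

            // The true test labels are withheld, so estimate AUC with cross-validation instead
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(nb, data, 10, new Random(1));

            // Area under the ROC curve for class index 0 (for example, the positive churn class)
            System.out.println("AUC = " + eval.areaUnderROC(0));
        }
    }

This is only an approximation of the organizers' baseline; the AUC values listed above were computed on the hidden test labels that participants never had access to.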

Note that the baseline results are reported only for the large dataset. Moreover, while both the training and testing datasets are available on the KDD Cup site, the true labels for the test set are not provided. Therefore, when we process the data with our models, there is no way to...