
Machine Learning with the Elastic Stack - Second Edition

By: Rich Collier, Camilla Montonen, Bahaaldine Azarmi

Overview of this book

Elastic Stack, previously known as the ELK Stack, is a log analysis solution that helps users ingest, process, and analyze search data effectively. With the addition of machine learning, a key commercial feature, the Elastic Stack makes this process even more efficient. This updated second edition of Machine Learning with the Elastic Stack provides a comprehensive overview of Elastic Stack's machine learning features, covering both time series analysis and classification, regression, and outlier detection. The book starts by explaining machine learning concepts in an intuitive way. You'll then perform time series analysis on different types of data, such as log files, network flows, application metrics, and financial data. As you progress through the chapters, you'll deploy machine learning within the Elastic Stack for logging, security, and metrics. Finally, you'll discover how data frame analysis opens up a whole new set of use cases that machine learning can help you with. By the end of this Elastic Stack book, you'll have hands-on machine learning and Elastic Stack experience, along with the knowledge you need to incorporate machine learning in your distributed search and data analysis platform.
Table of Contents (19 chapters)

Section 1 – Getting Started with Machine Learning with Elastic Stack
Section 2 – Time Series Analysis – Anomaly Detection and Forecasting
Section 3 – Data Frame Analysis

Summary

To conclude the chapter, let's recap the second unsupervised learning feature in the Elastic Stack: outlier detection. Outlier detection can be used to detect unusual data points in single- or multi-dimensional datasets.
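In the Elastic Stack, outlier detection runs as a data frame analytics job. As a reminder of the shape of the API, a minimal job definition looks like the following (the index names here are hypothetical, and most parameters, such as n_neighbors, can simply be left at their defaults):

```json
PUT _ml/data_frame/analytics/ecommerce-outliers
{
  "source": { "index": "ecommerce-transactions" },
  "dest":   { "index": "ecommerce-outliers" },
  "analysis": {
    "outlier_detection": {}
  }
}
```

The job reads the source index, computes an outlier score and feature influence values for each document, and writes the annotated results to the destination index.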

The algorithm is based on an ensemble of four separate measures: two distance-based measures derived from the k-th nearest neighbors and two density-based measures. Together, these measures capture how far a given data point is from its neighbors and from the general mass of data in the dataset. This unusualness is expressed as a numerical outlier score that ranges from 0 to 1: the closer a given data point scores to 1, the more unusual it is in the dataset.
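To build intuition for the distance-based half of the ensemble, here is a small, self-contained sketch of one such measure: scoring each point by the distance to its k-th nearest neighbor and squashing the result into [0, 1). This is illustrative only; Elastic's implementation combines four measures and uses its own normalization.

```python
import math

def kth_nn_distance_scores(points, k=2):
    """Score each point by the distance to its k-th nearest neighbor,
    squashed into [0, 1) so that larger values mean 'more unusual'.
    This mimics just one of the four measures in the ensemble."""
    scores = []
    for i, p in enumerate(points):
        # Distances from p to every other point, smallest first.
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(points) if j != i
        )
        kth = dists[k - 1]
        # Monotonic squashing into (0, 1); not Elastic's exact normalization.
        scores.append(kth / (1.0 + kth))
    return scores

# A tight cluster plus one far-away point: the last point scores highest.
data = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
scores = kth_nn_distance_scores(data, k=2)
print(max(range(len(scores)), key=scores.__getitem__))  # → 4
```

The isolated point at (10, 10) has a much larger k-th nearest-neighbor distance than any cluster member, so its score sits close to 1 while the clustered points score around 0.5 or below.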

In addition to the outlier score, for each feature or field of a point, we compute a quantity known as the feature influence. The higher the feature influence for a given field, the more that field is responsible for a given point being unusual. These feature...