Machine Learning with the Elastic Stack - Second Edition

By : Rich Collier, Camilla Montonen, Bahaaldine Azarmi
Overview of this book

The Elastic Stack, previously known as the ELK Stack, is a log analysis solution that helps users ingest, process, and analyze data effectively. With the addition of machine learning, a key commercial feature, the Elastic Stack makes this process even more efficient. This updated second edition of Machine Learning with the Elastic Stack provides a comprehensive overview of the Elastic Stack's machine learning features, both for time series analysis and for classification, regression, and outlier detection. The book starts by explaining machine learning concepts in an intuitive way. You'll then perform time series analysis on different types of data, such as log files, network flows, application metrics, and financial data. As you progress through the chapters, you'll deploy machine learning within the Elastic Stack for logging, security, and metrics use cases. Finally, you'll discover how data frame analytics opens up a whole new set of use cases that machine learning can help you with. By the end of this Elastic Stack book, you'll have hands-on machine learning and Elastic Stack experience, along with the knowledge you need to incorporate machine learning into your distributed search and data analysis platform.
Table of Contents (19 chapters)

Section 1 – Getting Started with Machine Learning with Elastic Stack
Section 2 – Time Series Analysis – Anomaly Detection and Forecasting
Section 3 – Data Frame Analysis

Using custom rules and filters to your advantage

While anomaly detection jobs are incredibly useful, they are also agnostic to the domain and to the relevance of the raw data. In other words, the unsupervised machine learning algorithms do not know that a tenfold increase in CPU utilization (from 1% to 10%, for example) may not matter to the proper operation of an application, even though it may be statistically anomalous or unlikely in that scenario. Likewise, anomaly detection jobs treat every analyzed entity equally, but a user might want to suppress results for a certain IP address or user ID, knowing that anomalies found for those entities are neither desired nor useful. Custom rules and filters allow the user to inject domain knowledge into the anomaly detection job configuration, giving a fair amount of control over what gets marked as anomalous – or even over whether certain entities get considered part of the modeling process in...
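As a sketch of the two ideas above, the following Kibana Dev Tools snippets show a filter of "known-safe" entities and a detector-level custom rule that references it, plus a condition that ignores small CPU values. The job name, field names, thresholds, and filter items here are illustrative assumptions, not taken from the book's examples:

```json
PUT _ml/filters/safe_ips
{
  "description": "IP addresses whose anomalies should be suppressed",
  "items": ["10.0.0.1", "10.0.0.2"]
}

PUT _ml/anomaly_detectors/cpu_with_rules
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "mean",
        "field_name": "cpu_utilization",
        "partition_field_name": "clientip",
        "custom_rules": [
          {
            "actions": ["skip_result"],
            "scope": {
              "clientip": { "filter_id": "safe_ips", "filter_type": "include" }
            }
          },
          {
            "actions": ["skip_result"],
            "conditions": [
              { "applies_to": "actual", "operator": "lt", "value": 20 }
            ]
          }
        ]
      }
    ]
  },
  "data_description": { "time_field": "@timestamp" }
}
```

The first rule suppresses results for any entity whose `clientip` appears in the `safe_ips` filter; the second suppresses results whenever the actual mean CPU utilization is below 20%, so a statistically unusual jump from 1% to 10% would no longer surface as an anomaly.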