Getting Started with Elastic Stack 8.0

By: Asjad Athick

Overview of this book

The Elastic Stack helps you work with massive volumes of data to power use cases in the search, observability, and security solution areas. This three-part book starts with an introduction to the Elastic Stack and high-level commentary on the solutions the stack can be leveraged for. The second section focuses on each core component, giving you a detailed understanding of the component and the role it plays. You'll start by working with Elasticsearch to ingest, search, analyze, and store data for your use cases. Next, you'll look at Logstash, Beats, and Elastic Agent as components that can collect, transform, and load data. Later chapters help you use Kibana as an interface to consume Elastic solutions and interact with data on Elasticsearch. The last section explores the three main use cases offered on top of the Elastic Stack. You'll start with full-text search and look at real-world outcomes powered by search capabilities. Next, you'll learn how the stack can be used to monitor and observe large and complex IT environments. Finally, you'll understand how to detect, prevent, and respond to security threats across your environment. The book ends by highlighting architecture best practices for successful Elastic Stack deployments. By the end of this book, you'll be able to implement the Elastic Stack and derive value from it.
Table of Contents (18 chapters)

Section 1: Core Components
Section 2: Working with the Elastic Stack
Section 3: Building Solutions with the Elastic Stack

Summary

In this chapter, we understood how data in Elasticsearch can be aggregated for statistical insights. We explored how metric and bucket aggregations help slice and dice large datasets for analysis.
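As a quick refresher, the sketch below shows a terms bucket aggregation with a nested avg metric aggregation, run from the Kibana Dev Tools console. The index name (logs-web) and field names (host.name, http.response.bytes) are illustrative assumptions rather than examples from this chapter.

GET logs-web/_search
{
  "size": 0,
  "aggs": {
    "per_host": {
      "terms": { "field": "host.name" },
      "aggs": {
        "avg_response_bytes": { "avg": { "field": "http.response.bytes" } }
      }
    }
  }
}

Setting size to 0 skips returning individual documents; each distinct host.name value becomes a bucket, and the average of http.response.bytes is calculated within each bucket.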

We also looked at how ingest pipelines can be used to manipulate and transform incoming data to prepare it for use on Elasticsearch, and we explored a range of common ingest pipeline use cases.
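To recap the mechanics, the following is a minimal sketch of an ingest pipeline definition; the pipeline name, field names, and choice of processors here are hypothetical and not tied to a specific example from the chapter.

PUT _ingest/pipeline/clean-web-logs
{
  "description": "Tidy up incoming web log documents",
  "processors": [
    { "set": { "field": "event.ingested", "value": "{{_ingest.timestamp}}" } },
    { "lowercase": { "field": "host.name" } },
    { "remove": { "field": "raw_message", "ignore_missing": true } }
  ]
}

The set processor stamps each document with the ingest timestamp, lowercase normalizes host.name for consistent aggregations, and remove drops a field we do not need to store. Documents pass through the pipeline when they are indexed with the ?pipeline=clean-web-logs request parameter or when the index's default_pipeline setting points at it.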

Lastly, we looked at how Watcher can be used to implement alerting and response actions when your data changes, and we explored a range of common alerting use cases.
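As a reference point, here is a minimal sketch of a watch that checks for a spike in error logs; the watch ID, index pattern, schedule, and threshold are illustrative assumptions.

PUT _watcher/watch/error-spike-alert
{
  "trigger": {
    "schedule": { "interval": "10m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs-*" ],
        "body": {
          "size": 0,
          "query": {
            "bool": {
              "filter": [
                { "term": { "log.level": "error" } },
                { "range": { "@timestamp": { "gte": "now-10m" } } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 100 } }
  },
  "actions": {
    "log_error_spike": {
      "logging": { "text": "More than 100 error documents were indexed in the last 10 minutes" }
    }
  }
}

The trigger runs the search input every 10 minutes, the compare condition evaluates the number of matching hits, and the logging action fires only when the condition is met; in practice, an email, Slack, or webhook action would typically be used to notify responders.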

In the next chapter, we will get started with machine learning jobs to find anomalies in our data, run inference on new documents using the inference ingest processor, and run transform jobs to pivot incoming datasets for machine learning.