Getting Started with Elastic Stack 8.0

By: Asjad Athick

Overview of this book

The Elastic Stack helps you work with massive volumes of data to power use cases in the search, observability, and security solution areas. This three-part book starts with an introduction to the Elastic Stack and high-level commentary on the solutions the stack can be leveraged for. The second section focuses on each core component, giving you a detailed understanding of the component and the role it plays. You’ll start by working with Elasticsearch to ingest, search, analyze, and store data for your use cases. Next, you’ll look at Logstash, Beats, and Elastic Agent as components that can collect, transform, and load data. Later chapters help you use Kibana as an interface to consume Elastic solutions and interact with data on Elasticsearch. The last section explores the three main use cases offered on top of the Elastic Stack. You’ll start with full-text search and look at real-world outcomes powered by search capabilities. Next, you’ll learn how the stack can be used to monitor and observe large and complex IT environments. Finally, you’ll understand how to detect, prevent, and respond to security threats across your environment. The book ends by highlighting architecture best practices for successful Elastic Stack deployments. By the end of this book, you’ll be able to implement the Elastic Stack and derive value from it.

Table of Contents (18 chapters)

Section 1: Core Components
Section 2: Working with the Elastic Stack
Section 3: Building Solutions with the Elastic Stack

Searching for data

Now that we understand some of the core aspects of Elasticsearch (shards, indices, index mappings/settings, nodes, and more), let's put it all together by ingesting a sample dataset and searching for data.
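
If you want to see what these building blocks look like on a running cluster before loading any data, the _cat APIs give a quick, human-readable view. This is a minimal sketch that assumes an Elasticsearch node reachable at localhost:9200 over plain HTTP; a secured 8.x deployment would also need HTTPS and credentials:

    # List the nodes in the cluster, the indices they hold, and how each
    # index is divided into shards
    curl "localhost:9200/_cat/nodes?v"
    curl "localhost:9200/_cat/indices?v"
    curl "localhost:9200/_cat/shards?v"

    # Inspect the mappings and settings of a specific index
    # (my-index is a placeholder; substitute an index name from the list above)
    curl "localhost:9200/my-index/_mapping?pretty"
    curl "localhost:9200/my-index/_settings?pretty"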

Indexing sample logs

Follow these steps to ingest some Apache web access logs into Elasticsearch:

  1. Navigate to the Chapter3/searching-for-data directory in the code repository for this book. Inspect the web.log file to see the raw data that we are going to load into Elasticsearch for querying:
    head web.log
  2. A Bash script called load.sh has been provided for loading two items into your Elasticsearch cluster (a minimal, illustrative sketch of both items follows these steps):

(a) An index template called web-logs-template that defines index mappings and settings compliant with the Elastic Common Schema:

    cat web-logs-template.json

(b) An ingest pipeline called web-logs-pipeline that parses and transforms logs from your dataset into the Elastic Common Schema:

    cat web-logs-pipeline.json...
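
To make these two items more concrete, here is a minimal, hypothetical sketch of the requests that a script like load.sh could send. The template and pipeline bodies below are simplified stand-ins, not the actual contents of web-logs-template.json and web-logs-pipeline.json from the repository, and the grok pattern and field list are only illustrative of ECS-style parsing. The commands also assume an unsecured Elasticsearch node at localhost:9200; a secured 8.x cluster would additionally need HTTPS and credentials (for example, curl --cacert and -u):

    # A raw Apache access log line (common log format) looks roughly like:
    #   127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326

    # Simplified index template with ECS-style field mappings for web logs
    curl -X PUT "localhost:9200/_index_template/web-logs-template" \
      -H 'Content-Type: application/json' -d'
    {
      "index_patterns": ["web-logs*"],
      "template": {
        "settings": { "number_of_shards": 1, "number_of_replicas": 1 },
        "mappings": {
          "properties": {
            "@timestamp": { "type": "date" },
            "source":     { "properties": { "ip": { "type": "ip" } } },
            "url":        { "properties": { "original": { "type": "keyword" } } },
            "http": {
              "properties": {
                "request":  { "properties": { "method": { "type": "keyword" } } },
                "response": {
                  "properties": {
                    "status_code": { "type": "short" },
                    "body":        { "properties": { "bytes": { "type": "long" } } }
                  }
                }
              }
            }
          }
        }
      }
    }'

    # Simplified ingest pipeline: grok the raw line into ECS fields, then
    # parse the Apache timestamp into @timestamp and drop the temporary field
    curl -X PUT "localhost:9200/_ingest/pipeline/web-logs-pipeline" \
      -H 'Content-Type: application/json' -d'
    {
      "description": "Illustrative pipeline that parses Apache access logs into ECS fields",
      "processors": [
        {
          "grok": {
            "field": "message",
            "patterns": ["%{IPORHOST:source.ip} - %{USER:user.name} \\[%{HTTPDATE:timestamp}\\] \"%{WORD:http.request.method} %{DATA:url.original} HTTP/%{NUMBER:http.version}\" %{NUMBER:http.response.status_code} %{NUMBER:http.response.body.bytes}"]
          }
        },
        {
          "date": {
            "field": "timestamp",
            "formats": ["dd/MMM/yyyy:HH:mm:ss Z"]
          }
        },
        { "remove": { "field": "timestamp" } }
      ]
    }'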