Mastering Elasticsearch 5.x - Third Edition

Overview of this book

Elasticsearch is a modern, fast, distributed, scalable, fault-tolerant, and open source search and analytics engine. Elasticsearch leverages the capabilities of Apache Lucene, and provides a new level of control over how you can index and search even huge sets of data. This book will give you a brief recap of the basics and also introduce you to the new features of Elasticsearch 5. We will guide you through the intermediate and advanced functionalities of Elasticsearch, such as querying, indexing, searching, and modifying data. We'll also explore advanced concepts, including aggregation, index control, sharding, replication, and clustering. We'll show you the monitoring and administration modules available in Elasticsearch, and will also cover backup and recovery. You will get an understanding of how you can scale your Elasticsearch cluster to suit its context and improve its performance. We'll also show you how you can create your own analysis plugin in Elasticsearch. By the end of the book, you will have all the knowledge necessary to master Elasticsearch and put it to efficient use.

Preprocessing data within Elasticsearch with ingest nodes


We gave you a brief overview of ingest nodes in the Node types in Elasticsearch section of Chapter 8, Elasticsearch Administration. In this section, we are going to cover ingest node functionality in detail.

Ingest nodes, which were introduced in Elasticsearch 5.0, help in preprocessing and enriching documents before they are actually indexed. This helps a lot in scenarios where you previously had to use a custom parser or Logstash to process and enrich documents before sending them to Elasticsearch. Now you can do all of this within Elasticsearch itself. The preprocessing is achieved by defining a pipeline, which consists of a series of one or more processors. Each processor transforms the document in some way; for example, you can add a new field with a custom value or remove a field completely.
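For illustration, the built-in set and remove processors cover exactly these two cases. A minimal sketch of a set processor definition, assuming a purely hypothetical field named environment, looks like this:

{ 
  "set" : { 
    "field" : "environment", 
    "value" : "production" 
  } 
}

Processor definitions such as this one never stand alone; they are always grouped inside a pipeline, whose structure is described next.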

Working with ingest pipeline

An ingest pipeline has the following structure:

{ 
  "description" : "...", 
  "processors" : [ ... ] 
} 