Elasticsearch 5.x Cookbook - Third Edition

By: Alberto Paro

Overview of this book

Elasticsearch is a Lucene-based distributed search server that lets you index and search petabytes of unstructured data. This book is your one-stop guide to mastering the complete Elasticsearch ecosystem. We'll guide you through comprehensive recipes on what's new in Elasticsearch 5.x, showing you how to create complex queries and analytics, and how to perform index mapping, aggregation, and scripting. Further on, you will explore the cluster and node monitoring modules and see ways to back up and restore a snapshot of an index. You will learn how to install Kibana to monitor a cluster and how to extend Kibana with plugins. Finally, you will see how to integrate your Java, Scala, Python, and big data applications, such as Apache Spark and Pig, with Elasticsearch, and how to add enhanced functionality with custom plugins. By the end of this book, you will have in-depth knowledge of the Elasticsearch architecture and will be able to manage data efficiently and effectively with Elasticsearch.

Put an ingest pipeline


The power of the pipeline definition is that it can be created and updated without restarting a node (unlike Logstash). The definition is stored in the cluster state via the put pipeline API.

After defining a pipeline, we need to provide it to the Elasticsearch cluster.
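As a reminder, a pipeline definition is just a JSON object with a description and a list of processors. A minimal sketch follows; the field and value shown are illustrative placeholders, not part of the recipe:

            { 
              "description" : "what this pipeline does", 
              "processors" : [ 
                { "set" : { "field" : "some_field", "value" : "some_value" } } 
              ] 
            }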

Getting ready

You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 2, Downloading and Setup.

To execute curl via the command line, you need to install curl for your operating system.
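Assuming a default local installation, you can check that both curl and the cluster are reachable by querying the root endpoint, which returns the cluster name and version:

            curl -XGET 'http://127.0.0.1:9200/'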

How to do it...

To store or update an ingestion pipeline in Elasticsearch, we will perform the following steps:

  1. We can store the ingest pipeline via a PUT call:

            curl -XPUT 'http://127.0.0.1:9200/_ingest/pipeline/add-user-john' -d '{ 
              "description" : "Add user john field", 
              "processors" : [ 
                { 
                  "set" : { 
                    "field": "user", 
                    "value": "john" 
                  } 
                } 
              ] 
            }'