Elasticsearch 8.x Cookbook - Fifth Edition

By : Alberto Paro
Overview of this book

Elasticsearch is a Lucene-based distributed search engine at the heart of the Elastic Stack that allows you to index and search petabytes of unstructured content. With this updated fifth edition, you'll cover comprehensive recipes relating to what's new in Elasticsearch 8.x and see how to create and run complex queries and analytics. The recipes will guide you through performing index mapping, aggregation, working with queries, and scripting using Elasticsearch. You'll focus on numerous solutions and quick techniques for performing both common and uncommon tasks such as deploying Elasticsearch nodes, using the ingest module, working with X-Pack, and creating different visualizations. As you advance, you'll learn how to manage various clusters, restore data, and install Kibana to monitor a cluster and extend it using a variety of plugins. Furthermore, you'll understand how to integrate your Java, Scala, Python, and big data applications such as Apache Spark and Pig with Elasticsearch and create efficient data applications powered by enhanced functionalities and custom plugins. By the end of this Elasticsearch cookbook, you'll have gained in-depth knowledge of implementing the Elasticsearch architecture and be able to manage, search, and store data efficiently and effectively using Elasticsearch.
Mapping a Percolator field

The percolator is a special field type that makes it possible to store an Elasticsearch query inside a field and execute it later with a percolate query.

The percolator can be used to detect all the stored queries that match a given document.

Getting ready

You will need an up-and-running Elasticsearch installation, as described in the Downloading and installing Elasticsearch recipe of Chapter 1, Getting Started.

To execute the commands in this recipe, you can use any HTTP client, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. I suggest using the Kibana console, which provides code completion and better character escaping for Elasticsearch.

How to do it...

To map a percolator field, follow these steps:

  1. We want to create a percolator that matches some text in a body field. We can define the mapping like so:
    PUT test-percolator
    {
      "mappings": {
        "properties": {
          "query": { "type": "percolator" },
          "body": { "type": "text" }
        }
      }
    }
  2. Now, we can store a document with a percolator query inside it, as follows:
    PUT test-percolator/_doc/1?refresh
    { "query": { "match": { "body": "quick brown fox" } } }
  3. Now, let's execute a search on it, as shown in the following code:
    GET test-percolator/_search
    {
      "query": {
        "percolate": {
          "field": "query",
          "document": { "body": "fox jumps over the lazy dog" }
        }
      }
    }
  4. This will result in us retrieving the hits of the stored document, as follows:
    {
      ... truncated...
      "hits" : [
        {
          "_index" : "test-percolator",
          "_id" : "1",
          "_score" : 0.13076457,
          "_source" : {
            "query" : {
              "match" : { "body" : "quick brown fox" }
            }
          },
          "fields" : {
            "_percolator_document_slot" : [ 0 ]
          }
        }
      ]
    }
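If you are scripting these calls rather than typing them into the Kibana console, the request bodies from the preceding steps can be assembled as plain dictionaries and serialized to JSON. The following Python sketch only builds the payloads; the helper function names are illustrative, and sending them to a cluster (for example, with the official elasticsearch-py client) is left to you:

```python
import json


def percolator_mapping() -> dict:
    # Step 1: a "query" field of type percolator, plus the "body" text
    # field that the stored queries will reference.
    return {
        "mappings": {
            "properties": {
                "query": {"type": "percolator"},
                "body": {"type": "text"},
            }
        }
    }


def stored_query(text: str) -> dict:
    # Step 2: a document holding a match query inside the percolator field.
    return {"query": {"match": {"body": text}}}


def percolate_search(document_body: str) -> dict:
    # Step 3: a percolate query that runs the given document against all
    # the queries stored in the "query" field.
    return {
        "query": {
            "percolate": {
                "field": "query",
                "document": {"body": document_body},
            }
        }
    }


if __name__ == "__main__":
    # Each payload serializes to the JSON body shown in the recipe.
    print(json.dumps(percolate_search("fox jumps over the lazy dog"), indent=2))
```
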

How it works…

The percolator field stores an Elasticsearch query inside it.

Because all the percolator queries are parsed and kept active for performance, every field that the stored queries reference must be defined in the mapping of the document.

Since the queries in all the percolator documents are run against every document that's passed to the percolate query, for the best performance, the queries stored inside the percolator field must be optimized so that they execute quickly.
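The cost model above can be made concrete with a toy, in-memory sketch. This is not how Elasticsearch implements percolation internally (it pre-filters candidate queries using terms extracted from the stored queries); the sketch only illustrates why, in the worst case, every stored query runs against each incoming document, so each one should be cheap to evaluate:

```python
# Toy illustration (not Elasticsearch internals): percolation inverts the
# usual search flow, running stored queries against an incoming document.
# With N stored queries, the worst-case cost grows linearly in N.

def match_query(field: str, text: str):
    """Return a predicate mimicking a simple match query on one field."""
    terms = set(text.lower().split())

    def predicate(doc: dict) -> bool:
        doc_terms = set(doc.get(field, "").lower().split())
        return bool(terms & doc_terms)  # match if any query term appears

    return predicate


# Two stored "percolator documents", keyed by document ID.
stored_queries = {
    "1": match_query("body", "quick brown fox"),
    "2": match_query("body", "elasticsearch percolator"),
}


def percolate(doc: dict) -> list:
    # Linear scan over all stored queries -- the core cost of percolation.
    return [qid for qid, query in stored_queries.items() if query(doc)]


print(percolate({"body": "fox jumps over the lazy dog"}))  # ['1']
```

Only the first stored query matches, since the document shares the term fox with it; this mirrors the hit returned in step 4 of the recipe.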