Advanced Elasticsearch 7.0

By: Wai Tak Wong

Overview of this book

Building enterprise-grade distributed applications and executing systematic search operations call for a strong understanding of Elasticsearch and expertise in using its core APIs and latest features. This book will help you master the advanced functionality of Elasticsearch and show you how to confidently develop a sophisticated, real-time search engine. You'll also learn to run machine learning jobs in Elasticsearch to speed up routine tasks. You'll get started by using Elasticsearch features on Hadoop and Spark to speed up query results and enhance the customer experience. You'll then get up to speed with performing analytics by building a metrics pipeline, defining queries, and using Kibana for intuitive visualizations that give decision-makers better insights. The book will later guide you through using Logstash, with examples, to collect, parse, and enrich logs before indexing them in Elasticsearch. By the end of this book, you will have comprehensive knowledge of advanced topics such as Apache Spark support, machine learning using Elasticsearch and scikit-learn, and real-time analytics, along with the expertise you need to increase business productivity, perform analytics, and get the very best out of Elasticsearch.
Table of Contents (25 chapters)
Section 1: Fundamentals and Core APIs
Section 2: Data Modeling, Aggregations Framework, Pipeline, and Data Analytics
Section 3: Programming with the Elasticsearch Client
Section 4: Elastic Stack
Section 5: Advanced Features

Custom analyzers

Elasticsearch gives you a way to customize your analyzer. The first step is to define the analyzer in the index settings; you can then reference it in the mappings. You can define the analyzer either in a single index or in an index template that applies to all indices matching an index pattern. Recall that an analyzer has exactly one tokenizer and, optionally, any number of character filters and token filters. Let's create a custom analyzer to extract the tokens that we will use in the next chapter; it contains the following components:

  • tokenizer: Use the char_group tokenizer to split text on separators such as whitespace, digits, punctuation (except hyphens), end-of-line characters, symbols, and more.
  • token filters: Use the pattern_replace, lowercase, stemmer, stop, length, and unique filters.
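The components above can be sketched as an index-creation request. This is a minimal illustration, not the book's exact definition: the index name, the analyzer, tokenizer, and filter names, the choice of separator characters, the pattern_replace pattern, and the length bound are all assumptions chosen for demonstration.

```json
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "my_char_group_tokenizer",
          "filter": [
            "my_pattern_replace",
            "lowercase",
            "stemmer",
            "stop",
            "my_length",
            "unique"
          ]
        }
      },
      "tokenizer": {
        "my_char_group_tokenizer": {
          "type": "char_group",
          "tokenize_on_chars": [
            "whitespace", "digit", "symbol", "\n",
            ".", ",", ";", ":", "!", "?", "(", ")", "[", "]", "\"", "'"
          ]
        }
      },
      "filter": {
        "my_pattern_replace": {
          "type": "pattern_replace",
          "pattern": "'s$",
          "replacement": ""
        },
        "my_length": {
          "type": "length",
          "min": 2
        }
      }
    }
  }
}
```

Note that char_group has no way to say "punctuation except hyphens" directly, so the sketch lists the punctuation characters individually and simply leaves the hyphen out. You can check the resulting token stream with the _analyze API, for example `POST my_index/_analyze` with `"analyzer": "my_custom_analyzer"` and a sample `"text"` value.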

Since the description text will be indexed differently, we need to...