Advanced Elasticsearch 7.0

By Wai Tak Wong

Overview of this book

Building enterprise-grade distributed applications and executing systematic search operations call for a strong understanding of Elasticsearch and expertise in using its core APIs and latest features. This book will help you master the advanced functionality of Elasticsearch and show you how to develop a sophisticated, real-time search engine with confidence. You'll also learn to run machine learning jobs in Elasticsearch to speed up routine tasks.

You'll get started by learning to use Elasticsearch features on Hadoop and Spark and to make search results faster, thereby improving the speed of query results and enhancing the customer experience. You'll then get up to speed with performing analytics by building a metrics pipeline, defining queries, and using Kibana for intuitive visualizations that give decision-makers better insights. The book will later guide you through using Logstash, with examples, to collect, parse, and enrich logs before indexing them in Elasticsearch.

By the end of this book, you will have comprehensive knowledge of advanced topics such as Apache Spark support, machine learning using Elasticsearch and scikit-learn, and real-time analytics, along with the expertise you need to increase business productivity, perform analytics, and get the very best out of Elasticsearch.
Table of Contents (25 chapters)

Section 1: Fundamentals and Core APIs
Section 2: Data Modeling, Aggregations Framework, Pipeline, and Data Analytics
Section 3: Programming with the Elasticsearch Client
Section 4: Elastic Stack
Section 5: Advanced Features

An analyzer's components

The purpose of an analyzer is to generate terms from a document; these terms are used to build the inverted index (a mapping from each unique term to the documents it appears in, along with information such as term frequencies). An analyzer has exactly one tokenizer and, optionally, any number of character filters and token filters. Whether built-in or custom, an analyzer is simply a composition of these three kinds of building blocks, as illustrated in the following diagram:
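To make the composition concrete, here is a minimal Python sketch (not Elasticsearch's implementation; all function names are illustrative) of how zero or more character filters, exactly one tokenizer, and zero or more token filters chain together:

```python
import re

def html_strip_char_filter(text):
    # Character filter: transforms the raw string before tokenization
    # (here, a naive removal of HTML tags).
    return re.sub(r"<[^>]+>", " ", text)

def whitespace_tokenizer(text):
    # Tokenizer: splits the filtered string into a stream of tokens.
    return text.split()

def lowercase_token_filter(tokens):
    # Token filter: transforms each token in the stream.
    return [t.lower() for t in tokens]

def analyze(text, char_filters, tokenizer, token_filters):
    # An analyzer chains: char filters -> one tokenizer -> token filters.
    for f in char_filters:
        text = f(text)
    tokens = tokenizer(text)
    for f in token_filters:
        tokens = f(tokens)
    return tokens

terms = analyze("<p>Quick FOX</p>",
                [html_strip_char_filter],
                whitespace_tokenizer,
                [lowercase_token_filter])
print(terms)  # ['quick', 'fox']
```

The resulting terms are what would be fed into the inverted index; swapping any stage (for example, a different tokenizer) yields a different analyzer without changing the overall pipeline shape.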

Recall from Chapter 1, Overview of Elasticsearch 7 (see the Analyzer section), that the standard analyzer is composed of the standard tokenizer and a lowercase token filter. The standard tokenizer provides grammar-based tokenization, while the lowercase token filter normalizes tokens to lowercase. Let's suppose that the input string is an HTML text string...
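The standard analyzer's behavior can be approximated with a short Python sketch. Note this is only an approximation: the real standard tokenizer implements Unicode Text Segmentation (UAX #29), whereas here a simple word-character regex stands in for grammar-based tokenization:

```python
import re

def standard_like_analyze(text):
    # Rough stand-in for the standard tokenizer: split on word
    # boundaries (the real tokenizer follows UAX #29 rules).
    tokens = re.findall(r"\w+", text)
    # Lowercase token filter: normalize every token to lowercase.
    return [t.lower() for t in tokens]

print(standard_like_analyze("The QUICK Brown-Foxes jumped!"))
# ['the', 'quick', 'brown', 'foxes', 'jumped']
```

Against a running cluster, you can observe the real analyzer's output by POSTing the same text to the `_analyze` API with `"analyzer": "standard"` and comparing the emitted tokens.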