Elasticsearch 8.x Cookbook - Fifth Edition

By: Alberto Paro

Overview of this book

Elasticsearch is a Lucene-based distributed search engine at the heart of the Elastic Stack that lets you index and search petabytes of unstructured content. This updated fifth edition covers comprehensive recipes on what's new in Elasticsearch 8.x and shows you how to create and run complex queries and analytics. The recipes guide you through index mapping, aggregations, queries, and scripting with Elasticsearch.

You'll focus on solutions and quick techniques for both common and uncommon tasks, such as deploying Elasticsearch nodes, using the ingest module, working with X-Pack, and creating different visualizations. As you advance, you'll learn how to manage clusters, restore data, and install Kibana to monitor a cluster and extend it with a variety of plugins.

Furthermore, you'll see how to integrate your Java, Scala, Python, and big data applications, such as Apache Spark and Pig, with Elasticsearch, and how to build efficient data applications powered by enhanced functionality and custom plugins. By the end of this Elasticsearch cookbook, you'll have gained in-depth knowledge of the Elasticsearch architecture and be able to manage, search, and store data efficiently and effectively using Elasticsearch.
Table of Contents (20 chapters)

Installing Apache Spark

To use Apache Spark, we need to install it. The process is straightforward because Spark does not have the traditional Hadoop prerequisites, such as Apache ZooKeeper and the Hadoop Distributed File System (HDFS).

Apache Spark can run as a standalone single-node installation, similar to Elasticsearch.

Getting ready

You need a Java Virtual Machine (JVM) installed; version 8.x or later is generally used. The maximum Java version supported by Apache Spark is 11.x.
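Before installing Spark, it is worth confirming that the installed JVM falls in the supported 8-11 range. The sketch below is an assumption, not part of the recipe: it parses the first line of `java -version` output (a sample string stands in here; on a live system you would capture it with `java -version 2>&1 | head -n 1`) and checks the major version.

```shell
# Hedged sketch: check that the Java major version is in Spark's
# supported range (8-11). The sample line below is illustrative.
version_line='openjdk version "11.0.14" 2022-01-18'

# Keep the quoted dotted version number, e.g. 11.0.14 (or 1.8.0).
raw=$(printf '%s\n' "$version_line" | sed 's/.*"\([0-9][0-9.]*\).*/\1/')
major=${raw%%.*}
# Pre-Java-9 versions report as 1.x, so 1.8.0 means major version 8.
if [ "$major" = "1" ]; then
  major=${raw#1.}
  major=${major%%.*}
fi

if [ "$major" -ge 8 ] && [ "$major" -le 11 ]; then
  echo "Java $major is within Spark's supported range (8-11)"
else
  echo "Java $major is outside Spark's supported range (8-11)" >&2
fi
```

The pre-Java-9 branch matters because `java -version` on Java 8 reports `1.8.0_…` rather than `8`.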

How to do it...

To install Apache Spark, we will perform the following steps:

  1. Download a binary distribution from https://spark.apache.org/downloads.html. For generic usage, I would suggest that you download a standard version using the following request:
    wget https://dlcdn.apache.org/spark/spark-3.2.1/spark-3.2.1-bin-hadoop3.2.tgz 
  2. Next, extract the Spark distribution using tar, as follows:
    tar xfvz spark-3.2.1-bin-hadoop3.2.tgz 
  3. Now, we can test whether Apache Spark is working by executing...
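The download and extraction steps above can be sketched as a small script, parameterized so the Spark version is set in one place. The dry-run guard is my addition (not part of the recipe) so the commands can be inspected before fetching a large archive; the `spark-shell` line at the end is an assumption about a typical smoke test, since the recipe's exact verification command is truncated here.

```shell
# Hedged sketch of steps 1-2, with the version set in one place.
SPARK_VERSION=3.2.1
HADOOP_VERSION=3.2
SPARK_DIST="spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}"
URL="https://dlcdn.apache.org/spark/spark-${SPARK_VERSION}/${SPARK_DIST}.tgz"

# Dry run by default: print the commands instead of executing them.
# Set RUN=1 to actually download (~300 MB) and extract.
if [ "${RUN:-0}" = "1" ]; then
  wget "$URL"                     # step 1: download the distribution
  tar xzf "${SPARK_DIST}.tgz"     # step 2: extract it
  # A typical (assumed) smoke test is to start the interactive shell:
  # "${SPARK_DIST}/bin/spark-shell"
else
  echo "wget $URL"
  echo "tar xzf ${SPARK_DIST}.tgz"
fi
```

Pinning the version in a variable makes it easy to move to a newer Spark release by editing a single line.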