Scaling Big Data with Hadoop and Solr, Second Edition

By: Hrishikesh Vijay Karambelkar


Table of Contents (13 chapters)
Scaling Big Data with Hadoop and Solr Second Edition
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
Index

Preface

With the growth of information assets in enterprises, the need to build a rich, scalable search application that can handle a lot of data has become critical. Today, Apache Solr is one of the most widely adopted, scalable, feature-rich, and best-performing open source search application servers. Similarly, Apache Hadoop is one of the most popular Big Data platforms and is widely preferred by many organizations to store and process large datasets.

Scaling Big Data with Hadoop and Solr, Second Edition is intended to help its readers build a high-performance Big Data enterprise search engine with the help of Hadoop and Solr. The book starts with a basic understanding of Hadoop and Solr, and gradually progresses to building an efficient, scalable enterprise search repository for Big Data, using various techniques throughout the practical chapters.

What this book covers

Chapter 1, Processing Big Data Using Hadoop and MapReduce, introduces you to Apache Hadoop and its ecosystem, HDFS and MapReduce. You will also learn how to write MapReduce programs, configure Hadoop clusters and their configuration files, and administer your cluster.

Chapter 2, Understanding Apache Solr, introduces you to Apache Solr. It explains how to configure a Solr instance, create indexes, load your data into the Solr repository, and search effectively with Solr. It also discusses interesting features of Apache Solr.

Chapter 3, Enabling Distributed Search using Apache Solr, takes you through various aspects of enabling Solr for distributed search, including the use of SolrCloud. It also explains how Apache Solr and Big Data can come together to perform a scalable search.

Chapter 4, Big Data Search Using Hadoop and Its Ecosystem, explains NoSQL and the concepts of distributed search. It then explains how to use different algorithms for Big Data search, covering shards and indexing. It also talks about integration with Cassandra, Apache Blur, and Storm, as well as search analytics.

Chapter 5, Scaling Search Performance, guides you in improving search performance as your Big Data scales. It covers the different levels of optimization that you can perform on your Big Data search instance as the data keeps growing, and discusses different performance improvement techniques that users can apply to their deployments.

Appendix, Use Cases for Big Data Search, discusses some of the most important business cases for high-level enterprise search architecture with Big Data and Solr.

What you need for this book

This book discusses different approaches, each of which requires a different set of software; choose the software that matches the search application you intend to build. However, to run a minimal setup, you need the following software:

  • JDK 1.8 or above

  • Solr 4.10 or above

  • Hadoop 2.5 or above

Who this book is for

Scaling Big Data with Hadoop and Solr, Second Edition provides step-by-step guidance for any user who intends to build high-performance, scalable, enterprise-ready search application servers. This book will appeal to developers, architects, and designers who wish to understand Apache Solr/Hadoop and its ecosystem, design an enterprise-ready application, and optimize it based on their requirements. With practical examples and case studies, this book enables you to build a scalable search solution without prior knowledge of Solr or Hadoop.

Conventions

In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Delete the DFS data folder (you can find its location in hdfs-site.xml) and restart the cluster."

A block of code is set as follows:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master-server:9000</value>
  </property>
</configuration>

Any command-line input or output is written as follows:

$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver

New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "You can validate the content created by your new MongoDB DIH by accessing the Solr Admin page and running a query."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.

To send us general feedback, simply send an e-mail to , and mention the book title via the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at with a link to the suspected pirated material.

We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us at if you are having a problem with any aspect of the book, and we will do our best to address it.