Scaling Big Data with Hadoop and Solr

By: Hrishikesh Vijay Karambelkar
Overview of this book

As data grows exponentially day by day, extracting information becomes a tedious activity in itself. Technologies like Hadoop address some of these concerns, while Solr provides high-speed faceted search. Bringing these two technologies together helps organizations solve the problem of information extraction from Big Data by providing excellent distributed faceted search capabilities.

Scaling Big Data with Hadoop and Solr is a step-by-step guide that helps you build high-performance enterprise search engines while scaling data. Starting with the basics of Apache Hadoop and Solr, this book then dives into advanced topics of optimizing search with some interesting real-world use cases and sample Java code.

Scaling Big Data with Hadoop and Solr starts by teaching you the basics of Big Data technologies, including Hadoop and its ecosystem, and Apache Solr. It explains the different approaches to scaling Big Data with Hadoop and Solr, with discussion of the applicability, benefits, and drawbacks of each approach. It then walks readers through how sharding and indexing can be performed on Big Data, followed by the performance optimization of Big Data search. Finally, it covers some real-world use cases for Big Data scaling.

With this book, you will learn everything you need to know to build a distributed enterprise search platform, as well as how to optimize this search for maximum utilization of available resources.

solrconfig.xml


Chapter 2, Understanding Solr, of this book explains the solrconfig.xml file in detail. In this section, we will look at a sample configuration for log management. The interesting part of the Solr configuration is the introduction of facets. For log management, you may consider the following facets to make overall browsing more effective:

Timeline based: With this facet, users will be able to effectively filter their search based on time, for example, with options such as past 1 hour, past 1 week, and so on.

Levels of log: This facet provides filtering by log level, for example, SEVERE, ERROR, INFO, and so on.

Host: Since this system provides a common search across multiple machines, this facet can provide a filtering criterion when an administrator is looking for something specific to one host.

User: If an administrator knows the user, extracting user information from the logs enables better filtering through the user facet.

Application: Similar to the host facet, administrators can filter the logs based on an application using this facet.

Severity: Severity can be another filtering criterion; the most severe errors can be isolated with this facet.
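As an illustrative sketch (not a listing from the book), the facets above could be requested at query time with Solr's standard faceting parameters. The field names (level, host, user, application, timestamp) and the core name logs are assumptions that depend on your log schema:

```
http://localhost:8983/solr/logs/select?q=error
  &facet=true
  &facet.field=level
  &facet.field=host
  &facet.field=user
  &facet.field=application
  &facet.range=timestamp
  &facet.range.start=NOW/DAY-7DAYS
  &facet.range.end=NOW
  &facet.range.gap=%2B1DAY
```

The facet.field parameters produce the count-per-value sidebars, while facet.range buckets the hypothetical timestamp field into daily slices for the timeline facet.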

In addition to this, you can also use features such as log highlighting, spelling correction, and suggestions (MoreLikeThis). The following screenshot shows a sample facet sidebar of Apache Solr to give us a better understanding of how it may look:

Similarly, the following sample Solr configuration shows different facets and other information when you access the /browse handler:
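The original configuration listing is not reproduced here; as a minimal sketch, a /browse-style request handler in solrconfig.xml could declare the field facets as defaults. The field names (level, host, application) are assumptions based on the log fields discussed above:

```xml
<!-- Hypothetical sketch of a search handler in solrconfig.xml;
     the facet field names are assumptions, not the book's listing. -->
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- enable faceting and expose the log-management facets -->
    <str name="facet">on</str>
    <str name="facet.field">level</str>
    <str name="facet.field">host</str>
    <str name="facet.field">application</str>
    <!-- hide facet values with zero matches -->
    <str name="facet.mincount">1</str>
  </lst>
</requestHandler>
```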

Similarly, the following configuration shows a timeline-based facet and features such as highlighting and spell check:
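Again, the original listing is not reproduced here; a sketch of such defaults, assuming a timestamp field for the timeline facet and a message field for highlighting, might look like this:

```xml
<!-- Hypothetical sketch; timestamp and message are assumed field names. -->
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- timeline-based facet: bucket log entries by day over the last 30 days -->
    <str name="facet">on</str>
    <str name="facet.range">timestamp</str>
    <str name="facet.range.start">NOW/DAY-30DAYS</str>
    <str name="facet.range.end">NOW</str>
    <str name="facet.range.gap">+1DAY</str>
    <!-- highlight matched terms in the log message -->
    <str name="hl">on</str>
    <str name="hl.fl">message</str>
    <!-- spell check for query suggestions -->
    <str name="spellcheck">on</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

The spellcheck component must also be defined elsewhere in solrconfig.xml (as a searchComponent) for the last-components reference to resolve.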