Scaling Big Data with Hadoop and Solr

By: Hrishikesh Vijay Karambelkar
Overview of this book

As data grows exponentially day by day, extracting information becomes a tedious activity in itself. Technologies like Hadoop are trying to address some of these concerns, while Solr provides high-speed faceted search. Bringing these two technologies together helps organizations resolve the problem of information extraction from Big Data by providing excellent distributed faceted search capabilities.

Scaling Big Data with Hadoop and Solr is a step-by-step guide that helps you build high-performance enterprise search engines while scaling data. Starting with the basics of Apache Hadoop and Solr, the book then dives into advanced topics of optimizing search with some interesting real-world use cases and sample Java code.

Scaling Big Data with Hadoop and Solr starts by teaching you the basics of Big Data technologies, including Hadoop and its ecosystem, and Apache Solr. It explains the different approaches to scaling Big Data with Hadoop and Solr, discussing the applicability, benefits, and drawbacks of each approach. It then walks you through how sharding and indexing can be performed on Big Data, followed by performance optimization of Big Data search. Finally, it covers some real-world use cases for Big Data scaling.

With this book, you will learn everything you need to know to build a distributed enterprise search platform, as well as how to optimize this search for maximum utilization of available resources.

Managing a Hadoop cluster


Once a cluster is launched, administrators should start monitoring it. Apache Hadoop ships with a number of tools to manage the cluster; in addition, there are dedicated open source as well as third-party applications for managing Hadoop clusters.
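Besides the bundled web interfaces described next, the same cluster state can be read programmatically. The following is a minimal sketch against the Hadoop 1.x mapred API; the class name ClusterHealth is illustrative, and it assumes a configured Hadoop 1.x client with mapred-site.xml (pointing at your JobTracker) on the classpath:

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

// Illustrative class name; prints a one-line summary of the MapReduce cluster.
public class ClusterHealth {
    public static void main(String[] args) throws Exception {
        // JobConf picks up mapred.job.tracker from the configuration
        // files on the classpath (assumption: a configured 1.x client).
        JobClient client = new JobClient(new JobConf());
        ClusterStatus status = client.getClusterStatus();
        System.out.printf("TaskTrackers: %d, map slots used: %d of %d, reduce slots used: %d of %d%n",
                status.getTaskTrackers(),
                status.getMapTasks(), status.getMaxMapTasks(),
                status.getReduceTasks(), status.getMaxReduceTasks());
        client.close();
    }
}

Run with the hadoop launcher (for example, hadoop ClusterHealth) so the cluster configuration and Hadoop JARs are already on the classpath.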

By default, Hadoop provides two web-based interfaces to monitor its activities: the JobTracker web interface and the NameNode web interface. The JobTracker web interface runs on the master server (http://localhost:50030 by default) and provides information such as heap size, cluster usage, and completed jobs. It also lets administrators drill down further into completed as well as failed jobs. The following screenshot shows an actual instance running in pseudo-distributed mode:

Tip

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
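Returning to the JobTracker interface: the drill-down into completed and failed jobs can also be reproduced in code. Here is a sketch using JobClient.getAllJobs() from the Hadoop 1.x mapred API; the class name JobReport is illustrative, and the cluster configuration is again assumed to be on the classpath:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;

// Illustrative class name; lists every job the JobTracker knows about.
public class JobReport {
    public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        for (JobStatus job : client.getAllJobs()) {
            String state;
            switch (job.getRunState()) {
                case JobStatus.SUCCEEDED: state = "completed"; break;
                case JobStatus.FAILED:    state = "failed";    break;
                case JobStatus.RUNNING:   state = "running";   break;
                default:                  state = "other";     break;
            }
            System.out.println(job.getJobID() + " -> " + state);
        }
        client.close();
    }
}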

Similarly, the NameNode web interface runs on the master server (http://localhost:50070 by default) and provides you with information about HDFS. Through it, you can browse the current HDFS file system from the Web, and you can see disk usage, available capacity, and information about live DataNodes.
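The same HDFS figures can be fetched programmatically. The sketch below uses the Hadoop 1.x DistributedFileSystem API (getDiskStatus() is deprecated in later releases); the class name HdfsReport is illustrative, and core-site.xml/hdfs-site.xml are assumed to be on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Illustrative class name; prints HDFS usage and the DataNodes
// currently reported by the NameNode.
public class HdfsReport {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads core-site.xml and hdfs-site.xml
        // Assumption: fs.default.name points at an HDFS NameNode, so this cast is safe.
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
        DistributedFileSystem.DiskStatus disk = dfs.getDiskStatus();
        System.out.printf("Capacity: %d bytes, DFS used: %d bytes, remaining: %d bytes%n",
                disk.getCapacity(), disk.getDfsUsed(), disk.getRemaining());
        for (DatanodeInfo node : dfs.getDataNodeStats()) {
            System.out.println("DataNode: " + node.getName());
        }
        dfs.close();
    }
}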