Apache Solr for Indexing Data


Overview of this book

Apache Solr is a widely used, open source enterprise search server that delivers powerful indexing and searching features. These features help fetch relevant information from a variety of sources and document formats. Solr also integrates with other open source tools, such as Apache Tika and Apache Nutch, to provide even more powerful features. This fast-paced guide starts by helping you set up Solr and get acquainted with its basic building blocks, giving you a better understanding of Solr indexing. You'll quickly move on to indexing text and reducing indexing time. Next, you'll focus on basic indexing techniques, the various index handlers designed to modify documents, and indexing structured data sources through the Data Import Handler. Moving on, you will learn techniques for real-time indexing and atomic updates, as well as more advanced indexing techniques such as de-duplication. Later on, we'll help you set up a cluster of Solr servers that provides fault tolerance and high availability. You will also gain insights into working scenarios covering different aspects of Solr, including how to use Solr with e-commerce data. By the end of the book, you will be competent and confident working with indexing and will have a solid knowledge base to build on.
Table of Contents (18 chapters)
Apache Solr for Indexing Data
Credits
About the Authors
About the Reviewers
www.PacktPub.com
Preface
Index

Understanding soft commit, optimize, and hard commit


Solr provides Near-Real-Time (NRT) search, which makes documents available for searching almost immediately after they have been indexed. Additions and updates to documents become visible in near real time. This is achieved using a soft commit (available in Solr 4.0+), which opens a new searcher over the latest changes while avoiding the high cost of calling fsync. A hard commit, by contrast, flushes the index data to stable storage so that it can be recovered in the event of a JVM crash; a soft commit does not, so documents that have only been soft committed can be lost if the JVM crashes before the next hard commit.
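As a sketch, both commit types are typically automated in solrconfig.xml through the <autoCommit> and <autoSoftCommit> elements; the interval values below are illustrative, not recommendations:

```xml
<!-- solrconfig.xml (fragment) -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flush index data to stable storage every 15 seconds,
       without opening a new searcher (openSearcher=false keeps it cheap) -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: make newly indexed documents searchable every second -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>
```

With this pairing, searchers see new documents within about a second, while durability is still guaranteed at the (longer) hard-commit interval.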

An optimize, on the other hand, forces all index segments to be merged into a single segment (or a specified maximum number of segments). It is similar to defragmenting an HDD: the index is rewritten, deleted documents are purged, and disk space is reclaimed; no documents are actually reindexed. Normally, index segments are merged gradually over time, as specified by the merge policy, but the optimize command forces this merging to happen immediately.
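For illustration, commits and optimizes can also be triggered explicitly by sending XML update messages to a core's update handler (the core name musicCatalog follows this book's example):

```xml
<!-- Update messages POSTed to /solr/musicCatalog/update,
     each sent as its own request body -->
<commit softCommit="true"/>    <!-- soft commit: make recent updates searchable -->
<commit/>                      <!-- hard commit: flush to stable storage -->
<optimize maxSegments="1"/>    <!-- force-merge the index into a single segment -->
```

Equivalently, the same operations can be requested with the commit, softCommit, and optimize URL parameters on the update handler.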

Let's see how we can use soft commit and optimize in Solr. We'll use our musicCatalog example and create a new...