Apache Solr for Indexing Data

Overview of this book

Apache Solr is a widely used, open source enterprise search server that delivers powerful indexing and searching features. These features help you fetch relevant information from a variety of sources and documents. Solr also integrates with other open source tools, such as Apache Tika and Apache Nutch, to provide even more powerful capabilities. This fast-paced guide starts by helping you set up Solr and get acquainted with its basic building blocks, giving you a better understanding of Solr indexing. You'll quickly move on to indexing text and improving indexing speed. Next, you'll focus on basic indexing techniques, the various index handlers designed to modify documents, and indexing structured data sources through the Data Import Handler. Moving on, you will learn techniques to perform real-time indexing and atomic updates, as well as more advanced indexing techniques such as de-duplication. Later on, we'll help you set up a cluster of Solr servers that combines fault tolerance and high availability. You will also gain insights into working scenarios for different aspects of Solr and how to use Solr with e-commerce data. By the end of the book, you will be competent and confident working with indexing and will have a solid knowledge base to program indexing components efficiently.

Introducing analyzers


To enable effective and efficient search, Solr splits text into tokens both during indexing and at search (query) time. It does this with the help of three main components: analyzers, tokenizers, and filters. Analyzers are used during both indexing and searching. An analyzer examines the text of fields and generates a token stream with the help of a tokenizer. Filters then examine the stream of tokens and perform one of three jobs: keeping tokens, discarding them, or creating new ones. Tokenizers and filters can be combined into pipelines, or chains, in which the output of one is the input of the next. Such a sequence of tokenizers and filters is called an analyzer, and the resulting output of the analyzer is used to match search queries or build the index. Let's see how we can use these components in Solr and implement them.
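In Solr, such a chain is declared inside a fieldType definition in schema.xml. The following is a minimal sketch of what that can look like; the field type name text_general, the stopwords.txt and synonyms.txt files, and the particular filter choices here are illustrative, not prescriptive:

  <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
    <!-- Chain applied when documents are indexed -->
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <!-- Chain applied to the user's query; note the extra synonym expansion -->
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

Here, the tokenizer first breaks the field text into tokens, and each filter then transforms the stream in order: the stop filter discards tokens, the synonym filter (on the query side only) creates new ones, and the lowercase filter rewrites existing ones, mirroring the keep/discard/create jobs described above. Because the type="index" and type="query" attributes let the two chains differ, the same field can be analyzed one way when building the index and another way when matching queries.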

Analyzers are core components that preprocess input text at indexing and search time...