Splunk Best Practices

Overview of this book

This book gives you an edge through insights that help with day-to-day work in Splunk. Working with data from various sources and analyzing it in Splunk can be tricky; this book teaches you best practices for doing so, along with tools and techniques that ease your work and ultimately save you time. In some cases, it will adjust your thinking about what Splunk is and what it can and cannot do. To start, you'll learn best practices for getting data into Splunk, analyzing data, and packaging apps for distribution. Next, you'll discover best practices in logging, operations, knowledge management, searching, and reporting. To finish, we will show you how to troubleshoot Splunk searches, as well as deployment, testing, and development with Splunk.

Manipulating raw data (pre-indexing)


There are advantages and disadvantages to using a Heavy Forwarder to deliver cooked data to the indexer. The primary advantage is better search-time performance: if we extract fields and break events before the data is written to disk, the work is less resource-intensive on both the search head and the indexer at search time.

The primary disadvantage arises when we perform these operations on the indexer itself instead of offloading them to a Heavy Forwarder. Without the Heavy Forwarder as an intermediary, the indexer has to work harder to scrub the data before writing it to disk.
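As a minimal sketch, event breaking and timestamp extraction of this kind are configured in props.conf on the Heavy Forwarder. The sourcetype name firewall_logs and the timestamp format below are illustrative assumptions, not taken from the book:

```
# props.conf on the Heavy Forwarder (hypothetical sourcetype)
[firewall_logs]
# Treat each line as one event rather than merging lines
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Timestamp parsing (assumes a syslog-style prefix such as "Oct 12 14:03:07")
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32
```

Because this runs on the Heavy Forwarder, the events arrive at the indexer already parsed (cooked), which is what shifts the processing cost away from the indexing tier.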

Routing events to separate indexes

Now that we have our Heavy Forwarder, we can start collecting data. For the first case, let's use a shared firewall log file to which multiple devices write their logs.

On Linux, it's pretty easy to add the shared mount to our Heavy Forwarder, and then we can just configure our Heavy Forwarder to ingest that file. That...
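The routing described in this section can be sketched with a monitor input plus index-routing transforms. The file path, sourcetype, index names (fw_cisco, fw_paloalto), and host patterns below are hypothetical placeholders for your own environment:

```
# inputs.conf on the Heavy Forwarder -- ingest the shared log file
[monitor:///mnt/shared/firewall.log]
sourcetype = firewall_logs
index = fw_default

# props.conf -- attach the routing transforms to the sourcetype
[firewall_logs]
TRANSFORMS-route_by_device = route_cisco, route_paloalto

# transforms.conf -- rewrite the destination index based on the event's host
[route_cisco]
SOURCE_KEY = MetaData:Host
REGEX = cisco-fw\d+
DEST_KEY = _MetaData:Index
FORMAT = fw_cisco

[route_paloalto]
SOURCE_KEY = MetaData:Host
REGEX = pa-fw\d+
DEST_KEY = _MetaData:Index
FORMAT = fw_paloalto
```

Events whose host matches neither pattern fall through to the default index set in inputs.conf; note that index-time routing like this only works where parsing happens, i.e. on a Heavy Forwarder or indexer, not on a Universal Forwarder.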