Splunk Best Practices
Overview of this book

This book will give you an edge through insights that help in your day-to-day work with Splunk. Working with data from various sources and analyzing it in Splunk can be tricky. This book teaches you the best practices for working with Splunk, along with tools and techniques that will make your life easier and ultimately save you time. In some cases, it will adjust your thinking about what Splunk is, and what it can and cannot do. To start with, you'll learn the best practices for getting data into Splunk, analyzing data, and packaging apps for distribution. Next, you'll discover best practices in logging, operations, knowledge management, searching, and reporting. To finish, we will teach you how to troubleshoot Splunk searches, as well as deployment, testing, and development with Splunk.
Table of Contents (16 chapters)

Consolidating indexing/forwarding apps


There is often a good reason to consolidate apps that either forward data to Splunk or transform data before it is written to disk. Consolidation reduces administrative overhead and allows a single package to be deployed to every system that matches the relevant criteria.

I will use Hadoop for this example. Suppose, hypothetically, that you have a 600-node Hadoop cluster (all on a Linux platform) on which you would also like to monitor CPU, memory, and disk metrics. Within that Hadoop system, components such as Spark, Hive, Hive2, and Platfora each have their own logs and data inputs. Some of these components have Apache web frontends whose logs will also need to be parsed, but not all nodes will need this.
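A consolidated forwarding app for a cluster like this would gather all of these inputs into a single `inputs.conf`. The following is a minimal sketch only: the file paths, index names, and sourcetypes are assumptions to be adapted to your environment, and the scripted inputs assume the metric-collection scripts shipped with the Splunk Add-on for Unix and Linux (`Splunk_TA_nix`) are available in the app's `bin` directory.

```ini
# inputs.conf -- hypothetical consolidated forwarding app
# (paths, index names, and sourcetypes are illustrative)

# Hadoop daemon logs
[monitor:///var/log/hadoop/*.log]
index = hadoop
sourcetype = hadoop:daemon
disabled = false

# Spark logs
[monitor:///var/log/spark/*.log]
index = hadoop
sourcetype = spark

# Hive / Hive2 logs
[monitor:///var/log/hive/*.log]
index = hadoop
sourcetype = hive

# Apache access logs -- only present on nodes running a web frontend;
# the monitor stanza is harmless on nodes where the path does not exist
[monitor:///var/log/httpd/access_log]
index = web
sourcetype = access_combined

# CPU, memory, and disk metrics via scripted inputs from Splunk_TA_nix
[script://./bin/cpu.sh]
interval = 60
index = os
sourcetype = cpu

[script://./bin/vmstat.sh]
interval = 60
index = os
sourcetype = vmstat

[script://./bin/df.sh]
interval = 300
index = os
sourcetype = df
```

Because monitor stanzas for missing paths simply collect nothing, one app like this can be pushed to all 600 nodes even though only some of them run the Apache frontend.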

It takes some work with the deployment server to make this happen, but there is a relatively easy way to do it: we create a consolidated forwarding app (that is, a deployment app) and a consolidated cluster app (that is, an indexing app).
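On the deployment server, a server class ties the consolidated forwarding app to the cluster nodes. A minimal `serverclass.conf` sketch, assuming a hypothetical hostname convention (`hadoop-node-*`) and app name (`hadoop_forwarding_app`), might look like this:

```ini
# serverclass.conf on the deployment server (names are hypothetical)

[serverClass:hadoop_nodes]
# Match every forwarder whose hostname follows the cluster convention
whitelist.0 = hadoop-node-*

[serverClass:hadoop_nodes:app:hadoop_forwarding_app]
# Push the app, enable it on the client, and restart splunkd after deployment
stateOnClient = enabled
restartSplunkd = true
```

Any forwarder that phones home and matches the whitelist receives the app automatically, so new nodes pick up the full set of inputs as soon as they join the cluster.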

Forwarding apps

These...