Improving Your Splunk Skills

By: James D. Miller, Paul R. Johnson, Josh Diakun, Derek Mock

Overview of this book

Splunk makes it easy for you to take control of your data and drive your business with the cutting edge of operational intelligence and business analytics. Through this Learning Path, you'll implement new services and use them to process machine-generated big data quickly and efficiently. You'll begin with an introduction to the new features, improvements, and offerings of Splunk 7. You'll learn to use wildcards efficiently and to modify your searches to make them faster. You'll learn how to enhance your applications by using XML dashboards and by configuring and extending Splunk. You'll also find step-by-step demonstrations that walk you through building an operational intelligence application. As you progress, you'll explore data models and pivots to extend your intelligence capabilities. By the end of this Learning Path, you'll have the skills and confidence to implement various Splunk services in your projects.

This Learning Path includes content from the following Packt products:
- Implementing Splunk 7 - Third Edition by James Miller
- Splunk Operational Intelligence Cookbook - Third Edition by Paul R. Johnson, Josh Diakun, et al.

Using CSV files to store transient data

Sometimes it is useful to store small amounts of data outside a Splunk index. Using the inputcsv and outputcsv commands, we can store tabular data in CSV files on the filesystem.
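For example, a small results table can be written out at the end of one search and read back later by another. A minimal sketch, using an illustrative file name (host_counts.csv) and a search that are not taken from this chapter:

index=_internal
| stats count by host
| outputcsv host_counts.csv

The file is written under $SPLUNK_HOME/var/run/splunk/ on the search head. A later search can load it back as its input:

| inputcsv host_counts.csv

Because inputcsv is a generating command, it must come first in the search, preceded by a leading pipe.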

Pre-populating a dropdown

If a dashboard contains a dynamic dropdown, you must use a search to populate the dropdown. As the amount of data increases, the query to populate the dropdown will run more and more slowly, even from a summary index. We can use a CSV file to store just the information needed, simply adding new values when they occur.
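With the values maintained in a small file, the dropdown's populating search can read that file directly instead of scanning raw events. A sketch of such a populating query, assuming the hypothetical file name user_summary.csv also used in the completed sketch further below:

| inputcsv user_summary.csv
| sort user

Since the file holds only one row per user, this search stays fast regardless of how much event data the deployment contains.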

First, we build a query to generate the CSV file. This query should be run over as much data as possible:

source="impl_splunk_gen" 
| stats count by user 
| outputcsv...
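The excerpt cuts off before the outputcsv argument. Spelled out with a stand-in file name (user_summary.csv; the chapter's actual file name may differ), the generating query would look like this:

source="impl_splunk_gen"
| stats count by user
| outputcsv user_summary.csv

To keep the file current without rerunning the query over all time, one possible maintenance pattern (not necessarily the exact one this chapter uses) is to merge recent results with the existing file and write the combined set back out:

source="impl_splunk_gen" earliest=-24h
| stats count by user
| append [| inputcsv user_summary.csv]
| stats sum(count) as count by user
| outputcsv user_summary.csv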