Operationalizing Threat Intelligence

By: Kyle Wilhoit, Joseph Opacki

Overview of this book

We’re living in an era where cyber threat intelligence is becoming more important. Cyber threat intelligence routinely informs tactical and strategic decision-making throughout organizational operations. However, finding the right resources on the fundamentals of operationalizing a threat intelligence function can be challenging, and that’s where this book helps. In Operationalizing Threat Intelligence, you’ll explore cyber threat intelligence in five fundamental areas: defining threat intelligence, developing threat intelligence, collecting threat intelligence, enrichment and analysis, and finally production of threat intelligence. You’ll start by finding out what threat intelligence is and where it can be applied. Next, you’ll discover techniques for performing cyber threat intelligence collection and analysis using open source tools. The book also examines commonly used frameworks and policies as well as fundamental operational security concepts. Later, you’ll focus on enriching and analyzing threat intelligence through pivoting and threat hunting. Finally, you’ll examine detailed mechanisms for the production of intelligence. By the end of this book, you’ll be equipped with the right tools and understand what it takes to operationalize your own threat intelligence function, from collection to production.
Table of Contents (18 chapters)

Section 1: What Is Threat Intelligence?
Section 2: How to Collect Threat Intelligence
Section 3: What to Do with Threat Intelligence

Threat intelligence maturity, detection, and hunting models

In the context of CTI, there are many maturity and hunting models for organizations to consider. In particular, there are three widely leveraged maturity models that will be discussed in this chapter, each of which approaches a different core problem. First, there's the Threat Intelligence Maturity Model (TIMM), which looks at the organization's overall intelligence maturity relative to a CTI program's adoption. Then, there's the threat Hunting Maturity Model (HMM), which addresses and defines an organization's hunting maturity rating. Finally, there's the detection maturity model, which addresses an enterprise's ability to detect malicious behavior and helps an organization rate its attack detection capabilities and relative maturity.

While not all organizations have the capability to hunt through their data or have established CTI practices, it is important to rate and track the maturity of your threat intelligence program and its detection capabilities, and to determine the organization's ability to hunt through data, if applicable.

TIMM

First published by ThreatConnect, the TIMM is intended to enable an organization to rate the maturity of a CTI function within an enterprise. Each level is distinct, starting at the least mature, or level 0, and going all the way to the most well-defined CTI program at maturity level 4:

  • Maturity level 0: Organization is unsure where to start.
  • Maturity level 1: Organization is getting accustomed to threat intelligence.
  • Maturity level 2: Organization is expanding threat intelligence capabilities.
  • Maturity level 3: Organization has a threat intelligence program in place.
  • Maturity level 4: Organization has a well-defined threat intelligence program.

Let's examine each maturity level in detail:

Figure 1.6 – Maturity levels

Maturity level 0 – organization is unsure where to start

Maturity level 0 is defined by an organization that doesn't have any threat intelligence program or experience in threat intelligence. Usually, threat intelligence programs start their life as threat collection programs. Typically, at this level, the organization has no staff solely dedicated to CTI, and any threat hunting that does occur is informal rather than part of a formalized role.

A great starting point to mature from level 0 includes collecting, storing, and aggregating organizational log data from endpoints, servers, or any connected device. Ideally, aggregation occurs in a systematic and formalized way, such as with a Security Information and Event Management (SIEM) tool.
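
As a minimal illustration of that starting point, the following Python sketch forwards a single endpoint event to a central syslog collector. The collector address, logger name, and event fields are illustrative placeholders; in practice, a SIEM agent or log forwarder would handle this at scale.

# Minimal sketch: forward an endpoint event to a central syslog collector.
# The address below is a placeholder for wherever the SIEM/collector listens.
import logging
import logging.handlers

COLLECTOR = ("127.0.0.1", 514)  # replace with the SIEM/collector address

handler = logging.handlers.SysLogHandler(address=COLLECTOR)
handler.setFormatter(logging.Formatter("endpoint01 %(name)s: %(message)s"))

logger = logging.getLogger("auth_events")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# An aggregated event like this becomes searchable history for later hunting.
logger.info("failed_login user=admin src_ip=203.0.113.45")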

Maturity level 1 – organization is getting accustomed to threat intelligence

Maturity level 1 is when the organization starts becoming accustomed to threat intelligence. Organizations at this level are typically starting to understand the vast nature of the threat landscape. They have basic logging in place, with logs often being sent to a SIEM tool. Often, analysts suffer alert fatigue due to a lack of resourcing, a lack of alert tuning, event overload, or a combination of these factors.

Analysts operating at level 1 will typically block and alert based on triggered rules from a system such as an Intrusion Detection System (IDS), which sometimes enables rudimentary hunting. These analysts usually leverage a centralized SIEM and are typically trying to tune alerts to make analysis more manageable. From a human capital perspective, organizations at level 1 will sometimes have limited cybersecurity staff performing threat hunting and intelligence.

While an organization rated as level 1 is still maturing and is reactionary in its approach, a great starting point for maturing from level 1 to level 2 is automating and tuning alerts in a SIEM or similar environment, alongside considering the additional headcount needed to scale a threat hunting function.
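
One hedged illustration of alert tuning is simple deduplication: collapsing repeated alerts so analysts review one record per rule and host rather than every repeat. The alert fields below are invented for the example, not taken from any particular SIEM.

# Sketch: collapse duplicate alerts to reduce alert fatigue.
from collections import defaultdict

raw_alerts = [
    {"rule": "brute_force_ssh", "host": "web01", "ts": "2022-03-01T10:00:00"},
    {"rule": "brute_force_ssh", "host": "web01", "ts": "2022-03-01T10:00:05"},
    {"rule": "malware_hash_match", "host": "hr-laptop-7", "ts": "2022-03-01T10:02:11"},
]

grouped = defaultdict(list)
for alert in raw_alerts:
    grouped[(alert["rule"], alert["host"])].append(alert["ts"])

for (rule, host), timestamps in grouped.items():
    print(f"{rule} on {host}: {len(timestamps)} occurrence(s), first seen {min(timestamps)}")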

Maturity level 2 – organization is expanding threat intelligence capabilities

Organizations at maturity level 2 will find that they are maturing in their CTI capabilities. Most often, level 2 is where you will see organizations draw contextual conclusions based on the intelligence they're generating. Typically, organizations operating at level 2 are collaborating to build processes that can place even the most basic indicator in the broader context of, for example, a criminal cyber attack. To facilitate this level of automation, CTI teams use scripts or a Threat Intelligence Platform (TIP).

Teams operating at level 2 will often find themselves ingesting both internal and external data feeds from a litany of threat intelligence providers. Teams at level 2 will often start the shift from a reactive approach (for example, blocking indicators on a firewall during an active incident) to a proactive approach (for example, proactively blocking indicators from a high-fidelity, enriched feed from a threat intelligence provider). In many organizations, there might be one or two full-time analysts dedicated to a CTI function.

Organizations looking to mature from level 2 to level 3 should be focusing on security automation. Security orchestration should also be a focus area during the maturation process within level 2. Both automation and orchestration can be achieved in a combination of ways, including analysts creating custom scripts and tools to help automate their key workflows. One primary requirement for maturing to level 3 is the CTI team's ability to create its own intelligence.

Maturity level 3 – organization has a threat intelligence program in place

Maturity level 3 is a level that many organizations won't reach, and that's perfectly fine. Not all organizations will have the same level of funding and resourcing available to achieve level 3. Maturity level 3 is defined by a team of security analysts or threat intelligence analysts with semi-automated workflows that proactively identify possible threat activity. It is common for this team to have incident response and forensics functionality in addition to CTI capabilities.

Processes and procedures have been thoroughly developed in level 3, and analysts working in the CTI function are typically tracking malware families, TAGs, and campaigns. A TIP is a commonplace finding at organizations at maturity level 3, which gives analysts the capability to store and analyze intelligence over a long period of time. Security orchestration might be in place for level 3, but it is likely not fully integrated into end-to-end security operations.

Workflows designed at level 3 should allow full intelligence integration into SOC, detection engineering, incident response, and forensics functions. This enables these business functions to make proactive and reactive decisions based on intelligence provided by the CTI team. Analysts should focus on adding context to the indicators they identify rather than focusing on individual indicators of maliciousness in isolation. This, in turn, is the process by which a level 3 maturity team creates its own intelligence rather than merely consuming others' intelligence. Analysts should find themselves asking questions such as, what additional activity is related to this indicator?
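
As a hedged sketch of that pivoting question, the snippet below asks a passive DNS service which domains have resolved to a suspicious IP address. The URL, authentication header, and response field names are placeholders; substitute the documented API of whichever enrichment provider you use.

# Hypothetical pivot: from one IP indicator to related domains via passive DNS.
import requests

PASSIVE_DNS_URL = "https://pdns.example.internal/api/v1/lookup"  # placeholder
API_KEY = "REPLACE_ME"

def related_domains(ip_address):
    response = requests.get(
        PASSIVE_DNS_URL,
        params={"ip": ip_address},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    # The field names below are assumptions about the provider's response shape.
    return [record["domain"] for record in response.json().get("records", [])]

print(related_domains("203.0.113.45"))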

Organizations that are maturing from level 3 to level 4 should focus on integrating orchestration, incident response, and intelligence enrichment into all security operations. Businesses that have reached maturity level 4 should also focus on deriving strategic value from the threat intelligence they're generating versus just tactical intelligence generation.

Maturity level 4 – organization has a well-defined threat intelligence program

Maturity level 4 is a step that many organizations strive to achieve, but few actually do. Due to a combination of funding, staffing, and inexperience, many organizations struggle to reach level 4 maturity. Organizations at level 4 maturity have stable threat intelligence programs with well-defined, formalized processes and procedures with automated and semi-automated workflows that produce actionable intelligence and ensure an appropriate incident response. Organizations operating within level 4 often have larger organizational functions, with mature procedures to provide intelligence to a litany of internal service owners, such as the organizational incident response function.

Organizations at level 4 will continue using the TIP mentioned in previous levels, with CTI teams beginning to build a security analytics platform architecture that allows their analysts and developers to build and run their own tools and scripts tailored to unique organizational requirements. Teams operating at level 4 utilize automation as much as possible, such as automatically ingesting API feeds of targeted attacker activity into a TIP, where a CTI analyst can vet the intelligence and pass it to security operations for blocking.
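
The snippet below is a hedged sketch of that ingestion pattern: pull indicators from a vendor feed API and post them into a TIP's REST API with a pending-review status so an analyst can vet them. Both URLs, the authentication scheme, and the field names are placeholders rather than any specific product's API.

# Hypothetical feed-to-TIP ingestion; endpoints and fields are placeholders.
import requests

FEED_URL = "https://intel.vendor.example/api/indicators?since=24h"
TIP_URL = "https://tip.example.internal/api/indicators"

feed = requests.get(FEED_URL, headers={"Authorization": "Bearer FEED_KEY"}, timeout=60)
feed.raise_for_status()

for indicator in feed.json().get("indicators", []):
    record = {
        "value": indicator["value"],
        "type": indicator["type"],       # e.g., domain, ip, sha256
        "source": "vendor-feed",
        "status": "pending_review",      # analyst vets before ops blocks it
    }
    requests.post(
        TIP_URL,
        json=record,
        headers={"Authorization": "Bearer TIP_KEY"},
        timeout=30,
    ).raise_for_status()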

A primary differentiator in level 4 is the amount of organizational buy-in for CTI functions. CTI functions at level 4 enable business decisions at the highest levels, including both strategic decisions and tactical decisions.

Now that we've covered the TIMM, let's examine an additional model to consider for implementation: the threat HMM.

The threat HMM

Organizations are quickly starting to learn the importance and benefit of threat hunting. The best foundation for beginning threat hunting is to follow a standard model that not only measures maturity but also ensures a systematic process is being followed by analysts themselves. Before we can discuss the concepts related to the threat HMM, we first need to answer the question: what is threat hunting?

Threat hunting can be best described as the process of proactively and systematically hunting through organizational logs to isolate and understand threat activity that evades an enterprise's compensating security controls. The tools and techniques that threat hunters employ are often varied, with no single tool being the silver bullet. The best tool or technique almost always depends on the threat the analyst is actively hunting.

It is important to note that hunting is most often done in a manual, semi-automated, or fully automated fashion, with the distinct goal of enabling detection and response capabilities proactively by turning intelligence into a detection signature.
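
To make "turning intelligence into a detection signature" concrete, the following sketch emits a simplified Suricata-style DNS rule for each known-bad domain. The domains are illustrative, and any generated rules should be validated against your IDS version before deployment.

# Sketch: convert domain indicators into simplified Suricata-style DNS rules.
malicious_domains = ["bad-domain.example", "c2.invalid"]  # illustrative

def to_dns_rule(domain, sid):
    return (
        f'alert dns any any -> any any (msg:"CTI: DNS query for {domain}"; '
        f'dns.query; content:"{domain}"; nocase; sid:{sid}; rev:1;)'
    )

for sid, domain in enumerate(malicious_domains, start=1000001):
    print(to_dns_rule(domain, sid))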

The threat HMM was developed by David Bianco and describes five key levels of organizational hunting capability. The HMM ranges its levels of capability from HMM0 (the least capable) to HMM4 (the most capable):

  • HMM0: Initial
  • HMM1: Minimal
  • HMM2: Procedural
  • HMM3: Innovative
  • HMM4: Leading

Let's examine each HMM level.

HMM0 – initial

The first level is HMM0, which can best be described as an organization that relies primarily on automated alerts from tools such as IDS or SIEM to detect malicious activity across the organization. Typically, organizations in HMM0 are not capable of hunting through their enterprises proactively. Feeds may or may not be leveraged in HMM0, and they are typically automatically ingested into monitoring systems, with little to no enrichment applied. The human effort in HMM0 would primarily be to resolve alerts generated from detection tools.

Data sourcing in HMM0 is usually non-existent or limited, meaning that, typically, organizations do not collect much in terms of data or logs from their enterprise systems, severely limiting their proactive hunting capabilities.

HMM1 – minimal

An organization operating in HMM1 still primarily relies upon automated alerting to drive its detection and response capabilities and processes. Organizations in HMM1 are primarily differentiated by their sources of collection. In HMM0, we learned that organizations had limited internal data sources (for example, endpoint logs), with no structured way of looking through those logs. HMM1 organizations find themselves collecting, at the very least, a few types of data from across the enterprise into a central collection point, such as a SIEM.

Analysts in HMM1 are able to extract key indicators from alerts and reports and search historical data to find any recent threat activity. Because of this search capability and limited log collection, HMM1 is the first level where true threat hunting happens despite its limited nature.
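
A minimal sketch of that HMM1 workflow follows: extract atomic indicators from a report with a simple regular expression, then search collected logs for matches. The report text and log path are illustrative.

# Sketch: pull IP indicators from report text and search historical logs for them.
import re
from pathlib import Path

report_text = "The actor used 198.51.100.23 and 203.0.113.7 for staging."

ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
indicators = set(ip_pattern.findall(report_text))

log_file = Path("/var/log/collected/proxy.log")  # illustrative central collection point
if log_file.exists():
    for line in log_file.read_text(errors="ignore").splitlines():
        if any(ioc in line for ioc in indicators):
            print("HIT:", line)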

HMM2 – procedural

Organizations in HMM2 find themselves with the capability to follow procedures and processes to perform basic hunting across enterprise datasets (for example, endpoint logs). Organizations in HMM2 often collect significantly more data from across the enterprise, such as firewall logs, endpoint logs, and network infrastructure logs.

It is likely that organizations in HMM2 won't have the maturity to define new workflows or processes for themselves, but they are capable of hunting both historically and, in some cases, proactively.

HMM2 is typically the most common level witnessed among organizations that employ active hunting programs.

HMM3 – innovative

Many hunting procedures found throughout enterprises focus on the analysis techniques of clustering similar behavior (for example, detecting malware by gathering execution details such as Windows Registry modifications and clustering activities identified elsewhere across the enterprise). Enterprises in HMM3 find themselves not only proactively hunting through a litany of internal log data sources, but they are also performing a grouping and clustering of activity. This clustering or grouping of activity involves identifying similar clusters of threat activity to proactively block, monitor, or further assess. Additionally, organizations operating in HMM3 often have highly skilled threat hunters who are adept at identifying nefarious activity across information systems or networks.
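
A hedged sketch of that grouping idea is shown below: each execution is described by the behaviors it exhibited (registry keys touched, processes spawned, ports used), and a density-based clustering pass groups the similar ones. The feature names and values are invented for the example.

# Sketch: cluster executions by observed behavior so similar activity groups together.
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import DBSCAN

executions = [
    {"reg:Run\\Updater": 1, "proc:powershell.exe": 1, "net:443": 1},
    {"reg:Run\\Updater": 1, "proc:powershell.exe": 1, "net:443": 1, "proc:whoami.exe": 1},
    {"proc:notepad.exe": 1},
]

vectors = DictVectorizer(sparse=False).fit_transform(executions)
labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(vectors)  # -1 marks unclustered noise

for execution, label in zip(executions, labels):
    print(label, sorted(execution))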

Typically, analysts in HMM3 leverage grouping and clustering to identify new threat activity that is bypassing traditional security controls. Analysts performing in HMM3 can identify nefarious activity, finding the needle in the haystack of enterprise data. Traditionally, automated alerts are highly tuned, with very little noise being produced.

As the number of hunting workflows and processes grows, scalability issues can appear; these are addressed in HMM4.

HMM4 – leading

Enterprises in HMM4 are leading the way in terms of defining procedures that organizations in HMM0–HMM3 generally follow. Organizations in HMM4 are advanced in terms of log collection, alert tuning, and the grouping/clustering of malicious activity. Organizations in HMM4 have well-defined workflows for detection and response purposes.

Automation is heavily employed in HMM4, clearly differentiating it from HMM3. Organizations in HMM4 will convert manual hunting methods (such as pulling WHOIS information for a domain being used as part of C2 infrastructure) into automated methods (such as automatically enriching domain intelligence with WHOIS information). This automation saves valuable analyst time and provides the opportunity for analysts to define new workflows to identify threat activity throughout the enterprise.
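
The snippet below is a minimal sketch of that conversion: a manual WHOIS lookup turned into a bulk enrichment step. It assumes the standard whois command-line client is installed; a WHOIS library or a provider API could be substituted.

# Sketch: automate WHOIS enrichment for suspected C2 domains.
import subprocess

suspect_domains = ["c2.invalid", "bad-domain.example"]  # illustrative

def whois_lookup(domain):
    result = subprocess.run(["whois", domain], capture_output=True, text=True, timeout=30)
    return result.stdout

for domain in suspect_domains:
    record = whois_lookup(domain)
    # In practice, this output would be parsed and attached to the indicator in the TIP.
    print(f"{domain}: {len(record)} bytes of WHOIS data")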

The detection maturity model

Ryan Stillions published the Detection Maturity Level (DML) model in 2014, but it remains useful today for measuring organizational maturity. At its core, DML is a detection model intended to act as an assessment methodology for determining an organization's effectiveness at detecting threat activity across information systems and networks. DML describes an organization's maturity regarding its ability to consume and act upon given CTI, rather than assessing its broader organizational maturity.

It's important to note there is a distinction between detection and prevention. As its name implies, the detection maturity model deals directly with detection versus prevention.

The DML consists of nine maturity levels, ranging from eight to zero:

  • DML-8: Goals
  • DML-7: Strategy
  • DML-6: Tactics
  • DML-5: Techniques
  • DML-4: Procedures
  • DML-3: Tools
  • DML-2: Host and network artifacts
  • DML-1: Atomic indicators
  • DML-0: None or unknown

The lowest of these levels is the most technical with the highest being the most technically abstract, disregarding level zero, of course.

Let's examine the detection maturity model in greater detail.

DML-8 – goals

DML-8 is the most technically abstract level; determining a threat actor's goals and motivations is often difficult, if not impossible, in some circumstances. The threat actor could be part of a larger organization that receives its goals from a source higher up in the operation. Additionally, the goals might not even be shared with the individual who has hands on the keyboard. If the goals are criminal in nature, it is often hard to determine the motivation of the attacker.

In some cases, goals are easy to determine, such as ransomware, which, typically, has a very clear motivation and goal. Many times, determining a goal is merely guessing at what the attacker's true goals were based on the behavior and data observations of lower DMLs (for example, stolen data, targeted victims, and more).

DML-8 is typically what C-level executives are most often concerned with; who did this, and why? is an extremely common question when the team is called into a board room.

DML-7 – strategy

DML-7 is a non-technical level that describes the planned attack. Usually, there are several ways an attacker can achieve their objectives, and the strategy determines which approach the threat actor should follow. Threat actor strategies vary based on goals and intent; a short-run criminal attack, for example, calls for a different strategy than a longer-term campaign. Determining a threat actor's strategy is often partially speculative in nature, with conclusions drawn from behavioral and data observations over a period of time. A good example of this type of observational information being built over time is the threat actor known as Sofacy. Sofacy has been tracked for years throughout the security industry, with new and unique attacks and new tool development occurring routinely. Watching this actor evolve over time can help inform an analyst of the attacker's intent, but without evidence, there is a degree of estimation.

It is important to note that both DML-7 and DML-8 are often hypothetical in nature. For this reason, they are not easily detectable via conventional compensating security controls.

DML-6 – tactics

In order to succeed at DML-6, an organization's analysts should be able to reliably detect a tactic being used regardless of the technique or procedure used by the threat actor. Typically, determining a tactic is a diverse process, done over time, most akin to profiling an attacker. A good example of this is the activity attributed to the Gorgon Group, which was first identified by Palo Alto Networks. That research details the tactics of an actor conducting both cybercriminal and nation-state espionage activity over a long period of time. Detailing an actor's TTPs over time gives explicit details about operational cadence, capabilities, and, in some cases, motivation.

Tactics form the first technical level of the DML. In most cases, tactics are not detected by a single IOC or single detection alert or signature. Tactics are typically identified by skilled analysts, rather than technical correlation.

DML-5 – techniques

Traditionally speaking, being able to detect an adversary's techniques is superior to determining their procedures. Techniques differ from procedures in that techniques are usually correlated to an individual rather than to a group.

Many threat actors aren't aware that when they perform attacks, they leave behind digital breadcrumbs helping analysts determine the specific techniques employed. DML-5 is primarily concerned with determining the techniques of an individual actor.

DML-4 – procedures

Determining actor procedures is effective for detecting adversary activity throughout an enterprise. In its simplest form, determining procedures means isolating threat actor activity that appears to be performed methodically two or more times during a time period the organization deems relevant.

Many of the procedures identified at this stage help an analyst determine broad behavior patterns, such as identifying procedures that would include a threat actor systemically connecting to victim systems and dumping credentials for lateral movement. As such, detection and alerting on procedures are typically broader in scope.

DML-3 – tools

Determining the specific tools that a threat actor employs is often not difficult and can provide a wealth of intelligence to a CTI analyst. Being able to detect adversary tools means you can reliably detect tool activity and the variations and functionality changes that the tool might experience.

Tool detection can be broken down into two categories: the transfer and presence of the tool, and the functionality of the tool. Both will be examined in detail:

  • Transfer and presence: This is the ability to identify the transfer and presence of the tool on either a server/endpoint or across the network. Additionally, this identifies active usage in the environment.
  • Functionality: This is the ability to identify the functionality of the tool via analysis techniques, such as static reverse engineering.

Detections are typically built from analysis of both the transfer and presence of the tools and the functionality of the tools themselves.
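
As a minimal sketch of the transfer and presence category, the snippet below hashes files in a directory and checks them against known hashes of an attacker tool. The known-hash value is a placeholder, and real detections would cover far more locations and tool variants.

# Sketch: detect tool presence by hashing files and comparing to known tool hashes.
import hashlib
from pathlib import Path

KNOWN_TOOL_HASHES = {
    "0" * 64: "example-credential-dumper",  # placeholder SHA-256 -> tool name
}

def sha256_of(path):
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

for candidate in Path("/tmp").glob("*"):
    if candidate.is_file():
        file_hash = sha256_of(candidate)
        if file_hash in KNOWN_TOOL_HASHES:
            print(f"Tool present: {KNOWN_TOOL_HASHES[file_hash]} at {candidate}")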

DML-2 – host and network artifacts

Many organizations spend a lot of time focusing on detecting host and network-based artifacts. Being perhaps the easiest of all to detect, host and network indicators are simply artifacts that are observed before and after an attack. If the underlying tools or malware change, even in the slightest sense, the detection methodology and strategy must shift.

While technical in nature, DML-2 is considered rather rudimentary when compared with more holistic detection methods, such as those found in DML-3, DML-4, and DML-5. Attribution poses an additional challenge when looking at detecting host and network artifacts. CTI analysts should never attribute tools to a specific threat actor, group, or country based on just host and network artifacts alone. Many tools are spread across threat actors and are shared, making it extremely difficult, if not impossible, to attribute a tool to a TAG.

DML-1 – atomic indicators

Atomic indicators are indicators that cannot be broken down into smaller parts while still retaining their meaning in the context of the intrusion activity. DML-1 is considered one of the most rudimentary of all detection methodologies. Detection at this level usually comes in the form of malware hashes, domains, URLs, IP addresses, and other technical indicators specifically related to attacker activity.

While technical in nature, atomic indicators are rather weak from a detection benefit perspective. Atomic indicators are temporal in nature: they are short-lived and prone to change. Additionally, atomic indicators lack context and often provide little intelligence value. Detection and response methods employing atomic indicators usually amount to playing whack-a-mole, blocking specific indicators that are constantly changing.

DML-0 – none or unknown

DML-0 is reserved for organizations that want detection capability but do not have it, or organizations that aren't mature enough to recognize the need for a CTI function. Organizations operating in DML-0 often don't have robust logging solutions to facilitate internal threat hunting. Organizations in DML-0 often have cybersecurity staff, but that staff is unlikely to be devoted to threat hunting.

In this section, we've examined three models that can be used to assess organizational maturity in different functional areas within the CTI field: the threat intelligence maturity, hunting, and detection maturity models. These models will help your organization assess its overall threat intelligence maturity, as well as its ability to hunt for and detect threat activity. Leveraging one of these maturity models will help organizations adapt and make meaningful decisions to mature the CTI function.

In the next section of this chapter, we'll coalesce all the information found throughout the chapter to determine what you can actually do with the intelligence once it's been collected and enriched.