Cyber threat intelligence, or CTI, is touted as the next big thing in InfoSec. But does it narrow the security problem or compound it?
Cyber threat intelligence (CTI) is one of the hottest topics in our industry right now and the noise surrounding it is deafening. Gartner and Forrester are covering the sector. Run a quick Google search and you’ll find nearly 2 million results, including a wave of product offerings and the creation of dedicated intelligence teams to help you build those offerings.
What is CTI? Ah, definitions! But yes, semantics do matter. To borrow a line from the Chris Farley comedy, Tommy Boy, “Just because you slap a label on it that says cyber threat intelligence, doesn’t make it intelligence…” Much like the now ubiquitous banner of cloud computing, cyber threat intelligence risks becoming a watered-down phrase, the purpose of which is to sell existing technology solutions. That is, unless we are careful to use the term consistently and in the right way.
I find it critically important to recognize the fundamental difference between raw or processed information and real intelligence. The former supports the latter, but they differ distinctly in value. A useful comparison between information and intelligence is summarized below. It is largely drawn from the pioneers of the intelligence field, the governments and militaries of the world, as well as from the information science community.
Info vs. Intel
Our militaries and governments have spent thousands of years (and much in the way of resources) learning to create and disseminate useful intelligence. Even if they don't do it perfectly, nor always move as quickly on new threats or new technologies as we would like, we can clearly turn to them as experts in defining CTI. For example, the FBI has published a useful definition of intelligence: "simply defined, intelligence is information that has been analyzed and refined so that it is useful to policymakers in making decisions – specifically, decisions about potential threats to our national security." Substitute "security and business professionals" for "policymakers" and change "national" to "corporate" or "organizational" and you've got a good working definition.
Similarly, information science gives us the Data, Information, Knowledge, Wisdom (DIKW) hierarchy, in which data supports information, which in turn can be used to create knowledge and ultimately enable wisdom. An example: all sensor data points could be data, and the alerts within that data could be information. Knowledge comes from adding context to those alerts, and wisdom could be that knowledge distilled into the skill of a security operator who, as an expert in turning data into context, could create new automated technologies to do the same.
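The DIKW progression can be sketched in a few lines of code. This is a minimal, illustrative example with hypothetical field names and thresholds, not a real detection pipeline: raw sensor events (data) are filtered into alerts (information), then enriched with analyst-supplied context (knowledge).

```python
# Illustrative DIKW walk-through; field names and thresholds are hypothetical.

# Data: every point a sensor emits, most of it benign noise.
sensor_events = [
    {"src_ip": "203.0.113.7", "action": "login_failed", "count": 1},
    {"src_ip": "203.0.113.7", "action": "login_failed", "count": 40},
    {"src_ip": "198.51.100.2", "action": "login_ok", "count": 1},
]

# Information: filter the data down to events worth a second look.
alerts = [
    e for e in sensor_events
    if e["action"] == "login_failed" and e["count"] > 10
]

# Knowledge: attach context that an analyst or enrichment service supplies.
context = {
    "203.0.113.7": "infrastructure previously tied to credential-stuffing campaigns",
}
enriched = [
    dict(a, context=context.get(a["src_ip"], "no known context"))
    for a in alerts
]

for a in enriched:
    print(a["src_ip"], "->", a["context"])
```

Wisdom, in this analogy, is the judgment that decided which filter and which context mattered in the first place.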
So going back to your challenge of figuring out how to leverage cyber threat intelligence in your organization, Gartner has published a useful definition: “evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice about an existing or emerging menace or hazard to assets that can be used to inform decisions regarding the subject’s response…” That’s definitely more than data and information.
More data, more problems
As you approach your specific implementation, don't equate cyber threat intelligence with raw information. Security teams and the technologies they employ don't need more raw data or raw information; they're already swimming in it. Haystacks of haystacks of event data pile in from sensors. Multitudes of open-source and commercial data feeds dump bad IP addresses and other unevaluated indicators into your environment via machine-to-machine consumption, leaving security teams to sort it all out. Even feeds delivering indicators with reputation scores are barely information, leaving security personnel wondering whether a "71" is really bad, or what the difference in risk is between a "99" and a "94."
Cyber threat intelligence needs to include much more than raw data and information. It requires rich contextual knowledge that can only be created through analysis, or it's not really intelligence. Contextual knowledge includes an understanding of the past, present, and future methodologies of a wide variety of adversaries. It incorporates the contextual linkage between technical indicators, infrastructures, tactics, techniques and procedures (TTPs), and campaigns, along with the motivation and intent of the adversaries employing them and information about who is being targeted.
Cyber threat intelligence starts with solid data and information gathered through collection, research, and identification of real threats from ongoing monitoring of malicious groups and actors within the global threat ecosystem. Without contextual analysis, there is no support for the decision-making process pointed to by both the FBI and Gartner as the core value of intelligence. Human analysis, empowered by and infused into technology automation, enables the creation of timely and accurate intelligence. Intelligence that is specific, vetted, rich in context, and actionable can inform the real severity of alerts, can help with incident response, can improve decisions on how to prioritize and respond to existing or emerging threats, and can even inform the development of a security strategy that proactively invests in controls that address real threats to your organization.
So as you approach the adoption of cyber threat intelligence, consider three things:
1. Capturing more event data and fusing it with mountains of raw data and feed information will necessitate a dedicated team of analysts to sort through it;
2. If you plan to conduct intelligence analysis in-house, you'll want to look for partners who have the human skills, including global cultural knowledge and presence, to extend your reach;
3. If you are not planning on hiring a dedicated intelligence team, be sure to look for partners that provide real intelligence.
In the end, more data and information will only overstress technologies, multiply false-positive alerts, create even more work for people who are probably already overloaded, and leave security teams unsure what to prioritize. Clearly, dumping raw data into an already strained organization doesn't narrow the security problem; it actually compounds it.