In my day job I work with many different companies: big ones, small ones, public ones, private ones, red ones, blue ones. When I'm not helping companies respond to incidents, I am usually helping them develop their IR plans or testing those plans through tabletop exercises. Across all the clients I work with, one of the most common difficulties is defining what constitutes an IT Security Incident.
Let's first talk about why this definition is a critical part of an IR plan. Essentially, this definition sets the criteria that must be met in order to activate your IR plan. As such, it needs to be broad enough to encompass any IT Security incident, but specific enough that we are not responding to ordinary network outages.
The definition I generally start with for any company as a point of discussion is the following:
"An IT Security Incident is any event that is attributable to human root cause and malicious intent."
Let's break down this definition to understand why I wrote it this way.
Let's start with "event." An event is some observable thing that happens. I use "thing" to keep it broad. In our context, this could be anything from an alert in our Intrusion Detection System (IDS) to a user coming up to us in person and explaining a situation through interpretive dance.
So, now for every event our goal will be to determine if there is a human root cause AND malicious intent. Why those two things?
Human Root Cause: Technologies fail and natural disasters happen. To keep our IR plan from competing with Disaster Recovery plans, we need to rule these out, which is why we require a human root cause. For instance, if a tornado relocates our data center for us, the effect is certainly a denial of service, but with no human behind it, this belongs to the Disaster Recovery plan rather than the IR plan.
Malicious Intent: Malicious intent is specifically added to qualify the human root cause aspect. This is what keeps accidents and misconfigurations out of scope. As an example, a drunken sysadmin walking through the data center trips on the power cord to a server. As a result, the server shuts down and the revenue-generating website is now offline. While this is attributable to a human root cause, there is no malicious intent, so it makes no sense to involve the IR team. Similarly, say a firewall is not properly configured. This is still not a security incident by itself. In terms of taxonomy, the misconfiguration would be an attacker's entry vector, and perhaps also the root cause of a later incident, but in and of itself it is not a security incident.
To use this definition, as we analyze events to determine whether they are false positives, we should assume there is a human root cause and malicious intent, and then work to disprove those conditions. Disproving them is safer than trying to prove them, failing, and not reacting to an actual incident.
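The triage logic above can be sketched as a small predicate. This is purely illustrative (the event type and field names here are my own invention, not from any real tooling): both conditions start out assumed true, and an event only remains an incident if neither can be disproven.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """A hypothetical observable event under triage.

    Both flags start as True (assumed) and are flipped to False
    only once an analyst disproves that condition.
    """
    description: str
    human_root_cause: bool = True
    malicious_intent: bool = True


def is_security_incident(event: Event) -> bool:
    """Apply the working definition: an event is an IT Security
    Incident only if BOTH conditions survive triage."""
    return event.human_root_cause and event.malicious_intent


# Examples from the post:
tornado = Event("Tornado relocates data center",
                human_root_cause=False, malicious_intent=False)
tripped_cord = Event("Sysadmin trips over server power cord",
                     human_root_cause=True, malicious_intent=False)
pos_malware = Event("Malware on PoS terminal exfiltrates card data",
                    human_root_cause=True, malicious_intent=True)

print(is_security_incident(tornado))       # False -> Disaster Recovery plan
print(is_security_incident(tripped_cord))  # False -> accident, no IR team
print(is_security_incident(pos_malware))   # True  -> activate the IR plan
```

Note that the default values encode the "assume, then disprove" posture: an event you haven't finished analyzing still evaluates as an incident, which is the safe failure mode.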
Now, you may ask: what about a user who accidentally emails sensitive data outside the company? You may argue that this is a data breach, that we therefore have a security incident, and that this definition is crap! To which I respond: nay.
The problem with this argument is that a data breach is not a security incident by itself. Hang tight, let me explain! A data breach does not occur on its own. It is usually the result of something else (the actual security incident). For example, malware infects a Point of Sale (PoS) terminal, steals credit card data, and sends it to Djibouti. From an IT Security standpoint, your problem is a malware problem. That is the root cause. The breach of credit card data is just a symptom. Therefore, if you rebuild the PoS terminal and close the infection vector used to plant the malware, you will solve the problem of credit card data being sent to Djibouti.
Let's go back to the example of a user who accidentally emailed out sensitive data. If we use the NIST 800-61 document and reference Figure 3-1, which I attached below, we'll see the Incident Response Life Cycle. You'll notice that to get to Post-Incident Activity, we first need to perform the Containment, Eradication & Recovery phase. How do we contain an e-mail that has already gone out? Do we eradicate the user? What do we do in the recovery phase?
However, the concern in the example is still real. If it's not a security incident, what is it? Well, it's what you thought it was: a data breach. If you went through the NIST document, you'll notice that it never really discusses dealing with a data breach. Instead it points you to OMB Memorandum M-07-16: Safeguarding Against and Responding to the Breach of Personally Identifiable Information. While that document is focused on US government agencies, the commercial sector can implement something similar: a Data Breach Plan. This plan would complement an existing IR plan, guiding the business in responding to a data breach, which requires broader involvement from different business units than an IT Security incident does.
In summary, this is the definition I use for companies that don't have an IR Plan and are looking to start one. It does its job of aligning the purpose of the IR Plan. One reason I don't use the NIST definition from the document referenced above is that it requires the incident to have violated a policy. This is great if your organization has well-documented and easy-to-remember policies. For many of the clients I work with, this is not the case.
Last but not least, only four of you put your names in the comments of my March post entering the raffle for the EFF hat. With a 25% chance of winning, I'd like to say congrats to mrlanrat who won the hat and stickers. I'll be contacting you to arrange shipping! Thanks again to everyone who reads this blog!