Incident Severity Ratings

Much of digital security focuses on pre-compromise activities. Not as much attention is paid to what happens once your defenses fail. My friend Bamm raised this issue when he described the difficulty of rating the severity of an incident. He was having trouble explaining the impact of an intrusion to his management, so he asked whether I had given any thought to the issue.

What follows is my attempt to apply a framework to the problem. If anyone wants to point me to existing work, please feel free. This is not an attempt to put a flag in the ground. We're trying to figure out how to talk about post-compromise activities in a world where scoring vulnerabilities receives far more attention.

This is a list of factors that influence the severity of an incident. It is written mainly from the intrusion standpoint; in other words, an unauthorized party is somehow interacting with your asset. I have ordered the options under each category so that the top item in each sub-list is considered worst and the bottom item is best. Since this is a work in progress, I put question marks in several of the sub-lists.

  1. Level of Control


    • Domain or network-wide SYSTEM/Administrator/root

    • Local SYSTEM/Administrator/root

    • Privileged user (but not SYSTEM/Administrator/root)

    • User

    • None?


  2. Level of Interaction


    • Shell

    • API

    • Application commands

    • None?


  3. Nature of Contact


    • Persistent and continuous

    • On-demand

    • Re-exploitation required

    • Misconfiguration required

    • None?


  4. Reach of Victim


    • Entire enterprise

    • Specific zones

    • Local segment only

    • Host only


  5. Nature of Victim Data


    • Exceptionally grave damage if destroyed/altered/disclosed

    • Grave damage if destroyed/altered/disclosed

    • Some damage if destroyed/altered/disclosed

    • No damage if destroyed/altered/disclosed


  6. Degree of Friendly External Control of Victim


    • None; host has free Internet access inbound and outbound

    • Some external control of access

    • Comprehensive external control of access


  7. Host Vulnerability (for purposes of future re-exploitation)


    • Numerous severe vulnerabilities

    • Moderate vulnerability

    • Little to no vulnerability


  8. Friendly Visibility of Victim


    • No monitoring of network traffic or host logs

    • Only network or host logging (not both)

    • Comprehensive network and host visibility


  9. Threat Assessment


    • Highly skilled and motivated, or structured threat

    • Moderately skilled and motivated, or semi-structured threat

    • Minimally skilled and motivated, or unstructured threat


  10. Business Impact (from continuity of operations plan)


    • High

    • Medium

    • Low


  11. Onsite Support


    • None

    • First level technical support present

    • Skilled operator onsite



Based on this framework, I would be most worried about the following, stated very bluntly so you see all eleven categories: an incident where the intruder has SYSTEM control, with a shell, that is persistent, on a host that can reach the entire enterprise, on a host with very valuable data, with unfettered Internet access, on a host with lots of serious holes, where I can't see the host's logs or traffic, where the intruder is a foreign intelligence service, where the host is a high business-impact system, and where no one is on site to help me.
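
As a rough illustration, the eleven factors could be encoded as something like the sketch below. This is only a sketch: the field names, the idea of treating each option as an ordinal rank (0 = worst option in its sub-list), and the notion of summing ranks into a single number are my assumptions, not part of the framework itself.

# Illustrative sketch only: each factor is an ordinal rank where
# 0 is the worst option in its sub-list and higher numbers are better.
from dataclasses import dataclass, fields

@dataclass
class IncidentSeverity:
    level_of_control: int        # 0 = domain-wide root ... 4 = none
    level_of_interaction: int    # 0 = shell ... 3 = none
    nature_of_contact: int       # 0 = persistent/continuous ... 4 = none
    reach_of_victim: int         # 0 = entire enterprise ... 3 = host only
    nature_of_victim_data: int   # 0 = exceptionally grave ... 3 = no damage
    external_control: int        # 0 = none ... 2 = comprehensive
    host_vulnerability: int      # 0 = numerous severe ... 2 = little to none
    friendly_visibility: int     # 0 = no monitoring ... 2 = comprehensive
    threat_assessment: int       # 0 = structured threat ... 2 = unstructured
    business_impact: int         # 0 = high ... 2 = low
    onsite_support: int          # 0 = none ... 2 = skilled operator

    def score(self) -> int:
        """Lower totals mean a more severe incident; 0 is the worst case."""
        return sum(getattr(self, f.name) for f in fields(self))

# The worst-case incident described above scores 0 in every category.
worst_case = IncidentSeverity(*([0] * 11))
print(worst_case.score())  # 0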

What do you think?

Comments

Lance Spitzner said…
In your last paragraph you point out 11 issues that would make up your worst case scenario. It can be argued that 7 or 8 of those points can be met by simply hacking an employee's WinXP box with a standard malware attachment.

I really think people underestimate the risk/damage that can happen with the simplest of attacks today.
Anonymous said…
Nice, but it's still just a scenario which has to be translated into actual costs for the enterprise.

Unless we have a way to measure every point in terms of money, we won't have a way to get through to them. The people we are addressing understand work-hours and speak money.

So in my eyes the next step would be to calculate the cost for every point: how many work-hours are we talking about when we take down components to check, investigate, and change things, whether the problem is domain-wide, per machine, per account, or concerns a shell, an API, or an application?

It's a hell of a lot of work, and once we actually come up with some numbers there will surely be people who gladly second-guess them. Still, if one were able to calculate these costs, it would illustrate to his or her boss what the security breach is (or would be) really about. Such numbers would also make life a lot easier when it comes to expenses for new equipment or further training; there would be a way to show a cost-benefit case for the investment.

I think such a rating has to come with a cost statement. Otherwise it would still be too abstract.
Anonymous,

You are definitely right about the money part, but assigning any costs whatsoever would be a mighty WAG, so much so that I think it would be worthless.
Unknown said…
In the Threat section, would you want to account for both unmotivated and insider attackers? Unmotivated, to me, suggests a crime of opportunity: a computer was left open, so grab it. Likewise, an insider may not be particularly skilled, but might have a level of knowledge of the environment that makes him more dangerous.

For Nature of the Data, would you want to know the value of the data or, if possible, the classification the organization uses? A database server housing financial information may not be a grave issue, but it could be valued higher than a database with help desk tickets. I can see tying this into Business Impact along with Nature of the Data, but either could affect the severity of an intrusion.

Otherwise, that listing is pretty tight as is!
You've got a good base of factors for consideration, but obviously the given categories are too complex. You've got a lot of _data_ here, but it hasn't really been channeled into information that can guide decisions.

Based on assessing these categories, I'd look at lumping things into a few buckets for management:

1) This issue will require disclosure, open us to legal proceedings, or will have financial impact in excess of N% of total corporate operational budget. This is the red bucket, the oh crap bucket, the severe bucket. The whole org will feel the impact of this one, and there will be permanent damage.

2) This issue will impact core business, will cause a noticeable issue in QoS, will cost in excess of N% departmental budget. This will hurt, but we can recover.

3) This is an issue that we can handle within the current operational budget. There may be localized issues with QoS, but this is the sort of stuff we are prepared to handle on a regular basis.

Looking back over these, organizational scope (where the effects will be felt) seems to be the driving factor in categorizing the issue.

I'd note that management should be told what bucket things are in right now, AND how likely it is for things to escalate. This is where the skill and motivation of the attacker comes into play.

Borrowing your soapbox for a moment, any report should include information about the organization's ability to monitor further intrusion into the network.

KISS Principle!
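
A very rough sketch of the three-bucket rule above, assuming hypothetical cost estimates and budget figures; the threshold percentage, the field names, and the flags are placeholders, not values from the comment:

# Hypothetical illustration of the three-bucket rule described above.
def bucket(estimated_cost, corporate_budget, department_budget,
           requires_disclosure, legal_exposure, n_percent=0.05):
    """Return 1 (severe), 2 (recoverable), or 3 (routine)."""
    # Bucket 1: disclosure, legal exposure, or cost above N% of the
    # corporate operational budget; the whole organization feels it.
    if requires_disclosure or legal_exposure or \
            estimated_cost > n_percent * corporate_budget:
        return 1
    # Bucket 2: hurts core business or exceeds N% of the department budget.
    if estimated_cost > n_percent * department_budget:
        return 2
    # Bucket 3: handled within the current operational budget.
    return 3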
Anonymous said…
Timely post as I've been bouncing ideas around the same subject. I braindumped how I'm approaching it over at http://electricfork.com/blog/37/defining-incidents

I based my system loosely on the Fujita tornado scale.
Anonymous said…
Since PDCERF includes a "preparation" aspect, I might also suggest something like "organizational handling/analysis preparedness level (for this incident type)":

1. Not prepared at all, no tools, training, or experience

2. Basic tools in place, but little/no training or experience

3. Experience or training in place, but little/no tools

4. Good experience/training in place and good tools in place

This is partially handled by some of the items above, but it generally rolls up capabilities for the different incident types that crop up.
hogfly said…
Richard,
You may want to look at the Mercalli intensity scale that's used to measure the impact of earthquakes. It ranges from 1 to 12, and I've found it to be highly accurate in assigning an impact statement or severity rating to an incident. It's much different from the Richter scale, for instance, in that the Mercalli scale recognizes that the intensity and scope of damage can vary from site to site and that impact has dependencies. I think you've got a fantastic start here. You could probably build what you have into the scale pretty easily.

Hope that helps.
Reference: http://www.first.org/resources/guides/csirt_case_classification.html
Greenthumb said…
I think we spend far more time on "ratings/rankings/scales/etc" than is necessary. We have to remember that the main objective of a security program is to enable our business, to keep them running, making money, providing a service or a product, etc. I think if we focus too much time on getting the perfect rating for a compromise we lose sight of that.

That being said, I do believe it's important to have a way to objectively score an incident. I'm of the belief that less is more. Doing my own research, I came across this IR plan used by a university: http://www.pcc.edu/about/policy/documents/Information-Security-Incident-Response-Plan.pdf

I like their approach to rating an incident. I think it's simple and has potential to be effective.

This is almost 10 years old, but I appreciate your interest. I've written a lot since then, some of which is more useful and/or applicable.
