Wednesday, July 18, 2007

No Undetectable Breaches

PaulM left an interesting comment on my post NORAD-Inspired Security Metrics:

...what if the enemy has a stealth plane that we cannot detect via radar, satellite, wind-speed variance, or any other deployed means? And what if your intel doesn't tell us that such a vehicle exists? Then we have potentially millions of airspace breaches every year and our outcome metrics are not helping.

I'm not disagreeing with you that outcome metrics are ideally better data than compliance metrics. However, outcome metrics are difficult to identify and collect data on, and it can be difficult to discern how accurate your metrics actually are.

At least with compliance metrics, we can determine how good we are at doing what it is we say that we do. It has little relevance to operational security, but it's easy and the auditors seem to like it.


For the case of a single breach, or even several breaches, it may be possible for them to happen and be completely undetectable. However, I categorically reject the notion that it is possible to suffer sustained, completely undetectable breaches and remain unaware of the damage. If you are not suffering any damage due to these breaches, then why are you even trying to deter, detect, and respond to them in the first place?

Let me put this in perspective by considering labels attached to classified information as designated by Executive Order 12356:

(a) National security information (hereinafter "classified information") shall be classified at one of the following three levels:

  1. "Top Secret" shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause exceptionally grave damage to the national security.

  2. "Secret" shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause serious damage to the national security.

  3. "Confidential" shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause damage to the national security.


We want to protect the confidentiality of classified information to avoid the losses described above. What happens if we suffer sustained breaches (thefts) of Top Secret data? Are we not going to notice that our national security interests are being hammered, since we are suffering "exceptionally grave damage"?

This is one way spies are unearthed. If your missions are constantly failing because the enemy seems to know your plans, then you're suffering a breach you haven't detected.

Finally, if you are suffering breaches and your input-based metrics aren't detecting them either, what good are they? Talk about a real waste of money. "It's easy and auditors seem to like it?" Good grief.

3 comments:

danny said...

Auditors like it because output is more deterministic. Vendors like it because they can tailor and position products as "solutions" to fill checkboxes and win earmarked budget. Consultants love it because they can understand and easily replicate it. Operational security folks, hrmm.. perhaps a split; raises the bar on one hand, but imposes potentially needless requirements on the other. The offshoot for the end-user is that compliance is arguably a waiver of sorts in the event that something actually happens; if the checkboxes are filled, that is -- whether it actually enhances operational security or not.

Custom compliance-based metrics (i.e., versus the canned versions running about as of late), OTOH, coupled with what you're calling field-assessed and outcome-based metrics, are certainly more effective, IMO.

Finally, regarding your comment "I categorically reject the notion that it is possible to suffer sustained, completely undetectable breaches and remain unaware of the damage," I tend to agree, though the window for detection can be anywhere from seconds to decades, and if it is the latter (which it certainly has been and can be), that changes things.

PaulM said...

It's unfair to knock folks that use compliance metrics. Compliance is business. And it usually falls, at least in part, to infosec folks to implement and monitor. Compliance failures are their own incidents with their own levels of very real risk.

I would also argue that custom compliance-based metrics and canned (Big 4) compliance metrics suffer the same problems. They measure how well you adhere to policy and procedure. Thoughtful policies and procedures are the key. The metrics are secondary.

For instance, a policy might say that we monitor and respond to unauthorized use of administrative accounts. A supporting procedure defines unauthorized use and administrative accounts, describes how we will monitor those accounts, and sets an SLA/OLA for responding to detected violations. A metric might be to look at the number of detected incidents and the number of times that OLA was met or missed. That's pretty custom, and defining who may use admin accounts, how, and when is a good control. But is focusing on whether the response took 110 minutes or 130 minutes worth anything? Not to the security of the environment, but it could be worth real money when doing risk assessments and resource planning. So was it worth doing? I would have to say yes, even though I died a little inside just now.
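That OLA metric is simple enough to compute mechanically. A minimal sketch, assuming an invented 120-minute response target and made-up incident response times (none of these numbers come from the discussion above):

```python
# Sketch of the OLA metric described above: count detected incidents
# and how many met the response-time target. All values are invented
# for illustration.

OLA_MINUTES = 120  # assumed response-time target

# minutes to respond for each detected admin-account incident
response_times = [95, 110, 130, 45, 180]

met = sum(1 for t in response_times if t <= OLA_MINUTES)
missed = len(response_times) - met

print(f"detected incidents: {len(response_times)}")
print(f"OLA met: {met}, missed: {missed}")
```

The metric itself is trivial; as PaulM notes, the value lies in the thoughtful policy and procedure behind it, not the counting.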

Bottom line, outcome-based metrics are better at trending the actual security environment you are operating in, but they're hard to build well and sometimes even harder to collect data for. Compliance metrics are useful and have real value, even if that value is contrived from a regulatory requirement that may or may not improve the actual security environment.
