Control-Compliant vs Field-Assessed Security

Last month's ISSA-NoVA meeting featured Dennis Heretick, CISO of the US Department of Justice. Mr. Heretick seemed like a sincere, devoted government employee, so I hope no one interprets the following remarks as a personal attack. Instead, I'd like to comment on the security mindset prevalent in the US government. Mr. Heretick's talk sharpened my thoughts on this matter.

Imagine a football (American-style) team that wants to measure their success during a particular season. Team management decides to measure the height and weight of each player. They time how fast each player runs the 40-yard dash. They note the college from which each player graduated. They collect many other statistics as well, then spend time debating which ones best indicate how successful the football team is. Should the center weigh over 300 pounds? Should the wide receivers have a shoe size of 11 or greater? Should players from the Northwest be on the starting line-up? All of this seems perfectly rational to this team.

An outsider looks at the situation and says: "Check the scoreboard! You're down 42-7 and you have a 1-6 record. You guys are losers!"

In my opinion, this summarizes the mindset of US government information security managers.

Here are some examples from Mr. Heretick's talk. He showed a "dashboard" with various "metrics" that supposedly indicate improved DoJ security. The dashboard listed items like:

  • IRP Reporting: meaning Incident Response Plan reporting, i.e., does the DoJ unit have an incident response plan? This says nothing about the quality of the IRP.

  • IRP Exercised: has the DoJ unit exercised its IRP? This says nothing about the effectiveness of the incident response team (IRT) in the exercise.

  • CP Developed: meaning Contingency Plan developed, i.e., does the DoJ unit have a contingency plan should disaster strike? This also says nothing about the quality of the CP.

  • CP Exercised: has the DoJ unit exercised its CP? Same story as the IRP.


Imagine a dashboard, then, showing all "green" for these items. Those green lights say absolutely nothing about the "score of the game."

How should the score be measured then? Here are a few ideas, which are neither mutually exclusive nor exceedingly well-thought-out:

  • Days since last compromise of type X: This is similar to a manufacturing plant's "days since an accident" report or a highway's "days since a fatality" report. For some sites this number may stay at zero if the organization is perpetually compromised. The higher the number, the better. (A sketch of this metric and the next appears below.)

  • System-days compromised: This looks at the number of systems compromised, and for how many days, during a specified period. The lower, the better.

  • Time for a pen testing team of [low/high] skill with [internal/external] access to obtain unauthorized [unstealthy/stealthy] access to a specified asset using [public/custom] tools and [complete/zero] target knowledge: This is from my earlier penetration testing story.


These are just a few ideas, but the common theme is they relate to the actual question management should care about: are we compromised, and how easy is it for us to be compromised?
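
As a rough illustration, here is a minimal Python sketch of how the first two metrics might be tallied. The incident records, system names, and dates are invented for the example; a real implementation would draw its data from an independent assessment group's findings rather than self-reporting:

    from datetime import date

    # Hypothetical incident records: (system, first day compromised, last day compromised).
    incidents = [
        ("mailserver01", date(2006, 5, 1), date(2006, 5, 4)),
        ("workstation17", date(2006, 5, 20), date(2006, 5, 20)),
    ]

    def days_since_last_compromise(incidents, today):
        # Days elapsed since the most recent known compromise ended; higher is better.
        if not incidents:
            return None  # no known compromises in the reporting period
        last_end = max(end for _, _, end in incidents)
        return (today - last_end).days

    def system_days_compromised(incidents):
        # Total days each system spent compromised during the period; lower is better.
        # A single-day compromise counts as one system-day.
        return sum((end - start).days + 1 for _, start, end in incidents)

    print(days_since_last_compromise(incidents, date(2006, 6, 1)))  # 12
    print(system_days_compromised(incidents))  # 4 + 1 = 5

Note that both functions count only the compromises the organization knows about, a weakness David Bianco raises in the comments.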

I explained my football analogy to Mr. Heretick and asked if he would adopt it. He replied that my metrics would discourage DoJ units from reporting incidents, and that reporting incidents was more important to him than anything else. This is ridiculous, and it indicates to me that organizations like this (and probably the whole government) need independent, Inspector General-style units that roam freely to assess networks and discover intruders.

In short, the style of "security" advocated by government managers seems to be "control-compliant." I prefer "field-assessed" security, although I would be happy to replace that term with something more descriptive. In the latest SANS NewsBites Alan Paller used the term "attack-based metrics," saying the following about the VA laptop fiasco: "if the VA security policies are imprecise and untestable, if the VA doesn't monitor attack-based metrics, and if there are no repercussions for employees who ignore the important policies, then this move [giving authority to CISOs] will have no impact at all."

PS: Mr. Heretick shared an interesting risk equation model. He uses the following to measure risk (a code sketch follows the list).

  • Vulnerability is measured by assessing exploitability (0-5), along with countermeasure effectiveness (0-2). Total vulnerability is exploitability minus countermeasures.

  • Threat is measured by assessing capability (1-2), history (1-2), gain (1-2), attributability (1-2), and detectability (1-2). Total threat is capability plus history plus gain minus attributability minus detectability.

  • Significance (i.e., impact or cost) is measured by assessing loss of life (0 or 4), sensitivity (0 or 4), operational impact (0 or 2), and equipment loss (0 or 2). Total significance is loss of life plus sensitivity plus operational impact plus equipment loss.

  • Total risk is vulnerability times threat times significance, with < 6 very low, 6-18 low, 19-54 medium, 55-75 high, and >75 very high.
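
To make the arithmetic concrete, here is a minimal Python sketch of the model as I understood it. The function and variable names are mine, and the example inputs are invented:

    def doj_risk(exploitability, countermeasures, capability, history, gain,
                 attributability, detectability, loss_of_life, sensitivity,
                 operational_impact, equipment_loss):
        # Ranges as described: exploitability 0-5, countermeasures 0-2,
        # each threat factor 1-2, loss_of_life and sensitivity 0 or 4,
        # operational_impact and equipment_loss 0 or 2.
        vulnerability = exploitability - countermeasures
        threat = capability + history + gain - attributability - detectability
        significance = loss_of_life + sensitivity + operational_impact + equipment_loss
        return vulnerability * threat * significance

    def risk_band(score):
        # Qualitative bands given in the talk.
        if score < 6: return "very low"
        if score <= 18: return "low"
        if score <= 54: return "medium"
        if score <= 75: return "high"
        return "very high"

    # Invented example: a fairly exploitable system holding sensitive data.
    score = doj_risk(4, 1, 2, 1, 2, 2, 1, 0, 4, 2, 0)  # 3 * 2 * 6 = 36
    print(score, risk_band(score))  # 36 medium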

Comments

DavidJBianco said…
Great analogy. I can tell you from first-hand experience that you're spot on. The current security management paradigm doesn't care so much about the effectiveness of the security system. It's mostly designed to make sure that the program looks good on paper.

I think you've left out an important factor in your suggestions for metrics, though. Simply measuring compromises is useless, because in most cases the agency will not know that they are compromised. Using "Days since last compromise of type X" or "System-days compromised" is misleading, since the world's poorest security system would look great by simply failing to detect anything amiss. In order to get good metrics, you'd also need to evaluate the effectiveness of the IDS/NSM in place, perhaps using the pen test team as a yardstick (e.g., "Number of pen test team attacks detected" or "Average response time to pen test incidents").
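
As a rough sketch of what I mean, with invented pen test results:

    # Invented pen test results: (attack, detected?, hours until response or None).
    pen_test_attacks = [
        ("sql_injection", True, 2.0),
        ("phishing_payload", False, None),
        ("lateral_movement", True, 30.0),
    ]

    detected = [a for a in pen_test_attacks if a[1]]
    detection_rate = len(detected) / len(pen_test_attacks)
    mean_response_hours = sum(a[2] for a in detected) / len(detected)

    print(f"{detection_rate:.0%} of pen test attacks detected")  # 67%
    print(f"{mean_response_hours:.1f} hours average response")   # 16.0 hours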
Anonymous said…
Also not well thought out, but what about the following basics:

- Percentage of Desktops with fully patched OS
- Percentage of Desktops with fully patched software
- Percentage of Servers with fully patched OS
- Percentage of Servers with fully patched software

- Percentage of Workstations and Servers with Anti-Virus and/or Anti-Spyware and basic (aka Windows XP SP2) firewall

- Number of Quarters since last internal Pen Test
- Number of Quarters since last external Pen Test

Obviously higher percentages and fewer quarters since the last test are better...
David Bianco (to differentiate from other David),

You make a good point. I do not want to promote an "ignorance is bliss" attitude. That is why an independent IG-type group would have to do the assessments and also monitor. If the end units are ultimately responsible for reporting their compromises, their incentive is to ignore or lie.
Second David,

Those are all good items to track, but they are still not measurements of real security. A fully patched system can still be compromised if misconfigured, misdeployed, etc.
Anonymous said…
The federal government has something called a "DAA" (Designated Approving Authority) who is supposed to be responsible for the security of these systems. All they have to do is fire one of them the next time one is negligent and the federal government would do a 180. Until that time, metrics will be used to increase their budgets.

Would the private sector tolerate such things if it cost a company millions?
Anonymous,

Unfortunately, private companies tolerate millions in losses all the time.
JimmytheGeek said…
It's the essence of a bureaucrat to point at a policy document and say, "Problem solved. Got this policy here." Now, if we just had a policy that there be no World Hunger...

There's a huge danger of skewing the metrics if rewards and punishments are tied to them. People are self-interested, occasionally rational creatures. So it takes some leadership to avoid data-corrupting gamesmanship. That's a scarce commodity. The trick is to reward steps that should generate an improvement in the numbers, rather than basing rewards on the numbers themselves.

Part of the problem with the "days since compromise" metric is that not everything is under an organization's control. The number of threats is independent of internal actions. So even if you do everything right, make the most important improvements, etc., you might have more compromises if the quantity and quality of threats (by your definition) goes up. A good leader should be able to reward a team that did everything right, even if a barrage of 0-day attacks succeed.

I think it's fundamentally a qualitative discipline, not quantitative. I like the notion of % workstations that are fully patched, but like you say, if misconfigured the patches won't help. Yikes! Computer Security is more like Ice Dancing than running the 440!

As for the risk assessment: every time you perform arithmetic operations on ordinal numbers, God kills a kitten. Both factors of the Vulnerability figure are pulled out of the air. Seriously. How do you rank vulnerabilities you haven't heard about yet? You come up with a SWAG. Threat factors are pulled out of a more odiferous place - "Our unknown assailants are|are not experts." How the hell do you BEGIN to assign a value to that? Can you buy a Threat Meter from Fluke? Even if these were good numbers, they are rankings. You can't take the order of finishers in a race and add that number to the order of height and get anything meaningful! You can't say the second place finisher was twice as slow. All you can say is that runner finished between #1 and #3.

And what does the risk assessment try to do??? The probability of an internet connected host with a known vulnerability being compromised over a long enough period is 1. If you are simply doing triage, so the most important stuff gets protected first/best, this is a relatively unhelpful process. You already know the answers, and you'd just pick numbers that will give the results that let you do what you were going to do anyway. The only real impact this kind of thing can have is to misdirect. Say you assign low values to individual workstations, because servers are more important. The assessment drives a decision that workstations get less attention, and they wind up compromised en masse. "But the risk assessment said..."

I'm in the middle of a Security Policy audit, and it is the biggest waste of time. I could be taking steps to improve the visibility of my network and hardening its elements. Instead, I am filling up a binder with dead trees. Said binder will reflect, not drive behavior.
Anonymous said…
Jimmy,

I catch your drift, but you are missing the main point of adopting a Security Policy!

Yes, there are some security professionals who make the numbers look better than they really are, but it only hurts them in the end. They will realize this when, as Richard has put it, the company looks at the score.

You start out with some idea of what you need to protect, and how important those things are to protect. Then, after either successful or detected attack attempts, you re-evaluate your equation.

Risk assessment allows us to reflect on our own security practices, and refine them to mitigate damage.

As quoted from the blog itself, "Do you want a defensible network or not?" Being able to defend against any attack is more important than believing you can stop them altogether.

I hope you re-assess the need for a security policy, and see the importance in it. While it seems a waste of time now, with one you will be able to change your defenses faster and keep attackers in check.
Anonymous said…
re: the metrics would "discourage DoJ units from reporting incidents"

The CISO of the DOJ does not have the authority to compel units to report compromises of DOJ assets? Sounds like the issue may be one of a lack of authority and competence.
JimmytheGeek said…
Jake - I would never argue that having a plan or a policy is a bad thing in itself. It's a _good thing_ to look at the assets you are protecting and assess whether the protection is good and whether your assessment has any basis in reality.

However, I don't think playing games with numbers is a useful tool, and I don't think the report cards Federal Agencies are getting (or the Policy Audit I'm going through) are anything but a waste of time.

We already have the plan - now we have to fit it to some bureaucrat's notion of how it should be formatted. We have to describe how we are doing things that cannot be done by anybody, like securing a wireless network (they speak in absolutes, so WEP/WPA does NOT cut it per the plain meaning of the language), or ensuring that end users cannot modify settings of hosts they have physical possession of.

We have to assign ranking numbers and combine them in all sorts of mathematically unsound ways, to come up with answers we already know. It's idiocy.

Of course, I could recover some of the time lost by cutting back on my ranting, but what fun is that?
Anonymous said…
Forgot to sign the post above...but that was me.

On a slightly different note, do you like the ISSA organization? I just recently found out about it and have been looking into membership when I move to the Seattle area. I can bet that experience is based almost fully on the local chapter, but is it fairly similar to Infragard or something? Just curious on your opinion. :)

-LonerVamp
LonerVamp,

I am a member of my local ISSA chapter. They usually have good speakers. I don't know about other ISSA chapters.
Anonymous said…
Strike my claiming a post above...I must not have submitted it properly. :(

- LonerVamp
Anonymous said…
I have attended some of the local Seattle ISSA chapter meetings and they are usually pretty good. You can see a list of current and past Seattle chapter meetings here to get an idea of the topics they address and the speakers they invite.

http://www.issa-ps.org/index.php?option=com_content&task=blogcategory&id=14&Itemid=44
Anonymous said…
If I understand the scoring system at the bottom of the article, the most severe incident that does not involve loss of life barely makes it into the "high" category.
No loss of life, high exploitability, low countermeasures, high capability, high history, high gain, low attributability, low detectability, high sensitivity, full loss of operability and equipment - and it's only "high"?

What can be more severe than the above situation during an incident?

Seriously flawed risk metric scoring, imho
Anonymous said…
I am sorry I missed this earlier. From first-hand experience, DOJ and Mr. Heretick concentrate way too much on metrics and not enough on 'real security'. They continue to drive the components absolutely nuts with this sort of emphasis.
Anonymous said…
I've worked in the group that Mr. Heretick has responsibility for, and much of what is said here is true. However, understand that DOJ is most concerned with not having Congress up their butts. If they do not score "green" on every item, the AG and CIO are held accountable and pummeled until DOJ makes the phony metrics. I spent many years building and testing worthless plans that were auditor-proof. So our group looked really good on paper. Everyone knew the plan would not work in real life. Heretick is pressured from above to make the metrics, and he brought the discipline to meet that mandate from DIA, where he also worked to MEET THE METRICS! This is what the government does. Be afraid, be very afraid!
Anonymous said…
DoD uses red teams to attack the network ("ethical hacking," some call this). I heard a recent speech by DHS Assistant Secretary Garcia saying that they are looking at this. DoJ probably will follow suit.
