Final Question on FAIR

I'd hoped to not have to say anything else on FAIR, but I've decided to ask one final question. If I can't get a straight answer to this question I'm giving up on the discussion. So, here it is.

In Thoughts on FAIR I walked through the section Analyzing a Simple Scenario. Steps 3, 4, 5, 8, and 9 [Estimate the probable Threat Event Frequency (TEF), Estimate the Threat Capability (TCap), Estimate Control strength (CS), Estimate worst-case loss, and Estimate probable loss, respectively] each require the user to "Estimate."

Do these Estimates matter to the output of the model?

If the answer is yes, then those answers are important and should be grounded in reality -- not opinions. If FAIR proponents agree with this, then we have been debating for days for no real reason. That would make me happy.

If the answer is no, then what is so magical about FAIR that garbage in does not produce garbage out?
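To make the question concrete, here is a minimal sketch of how estimate inputs drive a model's output. Everything here is illustrative: the function name, the uniform sampling, and the "TCap beats CS" success test are my own simplifications, not the actual FAIR calculation.

```python
import random

def simulate_annual_loss(tef_range, tcap_range, cs_range, loss_range, trials=10_000):
    """Toy FAIR-style Monte Carlo (illustrative only, not the real FAIR engine).

    Each 'estimate' is a (low, high) range sampled uniformly; the output
    distribution is entirely determined by these inputs."""
    losses = []
    for _ in range(trials):
        tef = random.uniform(*tef_range)         # threat events per year
        tcap = random.uniform(*tcap_range)       # attacker capability, 0-100
        cs = random.uniform(*cs_range)           # control strength, 0-100
        vuln = 1.0 if tcap > cs else 0.0         # event succeeds if TCap > CS
        magnitude = random.uniform(*loss_range)  # loss per successful event
        losses.append(tef * vuln * magnitude)
    return sum(losses) / trials                  # expected annualized loss

random.seed(1)
careful = simulate_annual_loss((1, 5), (20, 60), (50, 90), (10_000, 50_000))
garbage = simulate_annual_loss((1, 5), (20, 60), (50, 90), (1_000_000, 5_000_000))
# Same model, different loss estimates: the output scales with the input.
print(careful, garbage)
```

The point of the sketch: nothing in the arithmetic filters the inputs, so the quality of the output is exactly the quality of the estimates.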


Clint Laskowski said…
Garbage in = Garbage out.


A brilliant summary of the whole discussion.
Gotya said…
Dear Richard,
I don't think anyone can argue against 'GIGO'. However

1. This says nothing either for or against the system processing the 'G' -- in this case FAIR. If garbage were input into even the most sophisticated system, it could not help but spit out garbage. Your question actually validates Alex's claim -- that the comments are not really about FAIR at all.

2. You equate 'estimates' with 'opinion', which is where I think you gloss over the fact that subject matter expertise matters. In the field of IT security and risk management, for example, you have tremendous expertise, so your opinion is obviously very different from, and carries more weight than (as it should), the opinion of a "Gautam Sarnaik", purely because of your wide and in-depth experience as against mine. I believe 'estimates' allow this subject matter expertise to be factored into the model. So what's wrong with that?

Best Regards
Gautam Sarnaik
Tomas said…

Yes if you provide garbage to FAIR, you get garbage as result, but why would you provide garbage?

FAIR is about capturing knowledge/belief and making the best inference from that. It is not about objectively measuring the risk.

If you input values to the best of your knowledge, you will get the best inferred result from that input. Of course, the result will not be completely accurate, but it will be the best we can get.

And as Gautam Sarnaik puts it above: your "opinion", based on your expertise, is likely more correct than his or mine, so your input to FAIR will likely be more correct than ours, and thus the result will also likely be more correct.

Anonymous said…
Proponents of FAIR should read Andrew Jaquith's "Security Metrics". A 'Subject Matter Expert' in the field of measuring security categorically states that the tradition of attempting to measure topics such as "ALE" only detracts from measuring 'good' metrics. 'Good' metrics focus on 'process measurement' and 'key performance indicators', which can be analyzed and proven, unlike ALE figures, which are extremely sensitive to small changes in 'assumptions'.
Unknown said…

I think you're missing Richard's point (or maybe I am). What I think he is getting at is that garbage (intentional or unintentional) is inherent in this type of process. The FAIR model *may* be great as a model, but the challenge of input is a large one, and one that FAIR may minimize but cannot eliminate or detect. In other words, professionals have a variety of experiences, expertise and assumptions. It is these factors and others that *determine* the input. However, these factors suffer from the "you don't know what you don't know" syndrome and as a result the input provided to FAIR is, unfortunately, subjective. Does FAIR provide a model to prevent and detect this? No, it does not. Does FAIR provide a model that builds in some resistance to this subjectivity? Yes, I believe it does. For some, this resistance is not enough, because that still leaves too much subjectivity to draw meaningful inferences from.
Anonymous said…

Let me encourage you to re-read pages 14-27. You can also check the first page of your copy of Andrew's book, where you'll find my personal endorsement of his work. I find nothing in it (and I think Andrew would back me up on this) to suggest that good metrics are antagonistic to FAIR.

I think if you re-read that passage, you might find that you are confusing Andrew's distaste for ALE (which I share) with a hesitancy to discuss metrics for use in modeling.

Note that if you examine "What Makes a Good Metric" and "What Makes a Bad Metric", you'll find that proper use of a Bayesian network meets all of those elements of "good" and none of "bad".

I endorsed Andrew's book for various reasons, the least altruistic of which is that the better the metrics analysts use in determining FAIR factors, the less noise they will have to account for.
Anonymous said…
Richard -

Whatever inputs are decided on, presumably they reflect the individual's opinions accurately. In that case, if the inputs are garbage, the opinions are garbage as well. Since the qualitative approach relies on the same opinions, the qualitative approach is also garbage.

What providing estimates does, at the very least, is provide a level of scale and magnitude to the discussion, so that your inputs (opinions) can be reviewed, evaluated, discussed, and MADE BETTER. Eventually, they can even lead to objective numbers that have been proven and collected over time (great upside).

What's more, if the modeling is done correctly, they are even testable over time. And here comes the icing: you can more accurately reflect changes in your opinions over time.

A mathematical model can be no worse than a subjective, qualitative risk assessment. The process you must go through itself is enlightening, and if you decide in the end that your qualitative judgement should take precedence, then at least you'll know why.

If there is even one thing that you (or anyone else) think security professionals have wrong (i.e. conventional wisdom that is bad), you should be looking for ways to prove it. Quantitative models provide that opportunity.

Pete Lindstrom
Anonymous said…

Using "Opinion" here is a loaded word. In part of my upcoming response, I (may) write:

note that "opinion" is the wrong word to use. Neither FAIR nor any other Bayesian network tries to say that "Red is 74% better than Green" or "I like the St. Louis Cardinals so they will win this baseball game."

Rather, a Bayesian approach might provide a structured framework for including prior information like "the Cardinals have a .500 record, they do well against left-handed pitching, their hitters have a past history of excellence against the other starting pitcher, they are much better at home than on the road, their offense has had an OPS of .875 of late, they have had the best ERA of any NL team since June 15," etc., and then attaches a probability statement to the chances of winning the game.

Where the "subjectivity", "opinion", or "garbage" lies is in the fact that we are using these discrete statements of our experience as prior information in a probability statement: "will the Cardinals win?"
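A toy version of that structure, in beta-binomial form (every number here is invented for illustration; this is a sketch of the general Bayesian pattern, not anyone's actual baseball model):

```python
# Toy beta-binomial update (all numbers invented for illustration).
# Prior belief: the team is roughly a .500 club -> Beta(50, 50) pseudo-record.
prior_wins, prior_losses = 50, 50

# "Prior information" observed lately: 12 wins in the last 20 games.
recent_wins, recent_games = 12, 20

# Posterior mean win probability = (prior wins + recent wins) / all pseudo-games.
posterior = (prior_wins + recent_wins) / (prior_wins + prior_losses + recent_games)
print(f"P(win) = {posterior:.3f}")  # a probability statement, not a bare 'opinion'
```

The discrete statements of experience enter as the prior; the output is a reproducible probability, which is the distinction the comment is drawing.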
Unknown said…
@ anonymous:

Some Mea Culpa: Andrew does say (p23)
"Metrics can confer credibility when they can be measured in a consistent way. Different people should be able to apply the method to the same data set and come up with equivalent answers. 'Metrics' that depend on the subjective judgments of those ever-so-reliable creatures -humans- are not metrics at all, they are ratings."

Now I'm not entirely in disagreement with Andrew here. The premise (consistent and repeatable) is correct. However, what we're offering is that there is more than one way to achieve that goal. Andrew's solution is "don't use humans" - a Bayesian approach arrives at the same goal using the "subjective" prior information to arrive at a consistent, repeatable statement about probability (or frequencies).

Note that for an approach to be Bayesian it *must*, by definition, be consistent and repeatable. Richard and I should be able to go into different rooms with the same set of valid priors and come out with the same (or extremely similar) probability statements.

One other thing should be noted about the use of the word "Estimate" -

In many statistical methods the word "estimate" commonly refers to the end result of an analysis. For instance, the estimate of a pixel's amplitude in a CAT scan image is not the true -- but unknown -- amplitude. The amplitude estimate is a computed approximation of the pixel's amplitude; it is an estimate because it is the result of the calculation.

Estimates in FAIR are assigned before the risk is computed. In Bayesian probability theory these distributions are called prior distributions, or simply priors.

I will revisit the FAIR Introduction to make sure usage of the word estimate is limited to prior information.
Unknown said…

"However, these factors suffer from the "you don't know what you don't know" syndrome and as a result the input provided to FAIR is, unfortunately, subjective. Does FAIR provide a model to prevent and detect this? No, it does not."

Again, in a Bayesian model it's not prevention but accounting for noise that matters. Uncertainty must be assigned. Now, in the "Introduction" we do keep things simple; in commercial application we account for that in the simulations themselves. If you were inclined to build an open source simulator based on FAIR, I would strongly suggest that your tool account for uncertainty in a proper manner as well...
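One generic way simulations "account for uncertainty" (my own sketch, not the commercial tool's method) is to replace each point estimate with a distribution whose spread encodes the analyst's confidence, so low confidence shows up as a wider output:

```python
import random
import statistics

def sample_estimate(low, most_likely, high):
    """Triangular distribution: wider (low, high) bounds mean less confidence."""
    return random.triangular(low, high, most_likely)

random.seed(7)
# Two analysts give the same most-likely TEF (5 events/year) but different
# bounds; the spread of each result reflects that analyst's uncertainty.
confident = [sample_estimate(4, 5, 6) for _ in range(10_000)]
uncertain = [sample_estimate(1, 5, 20) for _ in range(10_000)]
print(statistics.stdev(confident), statistics.stdev(uncertain))
```

The model cannot detect that an input is wrong, but carrying the ranges through the simulation at least makes the uncertainty visible in the output instead of hiding it behind a single number.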
Anonymous said…
If you don't know what you don't know, I wonder how you assign uncertainty? Perhaps say that you're 50% uncertain about your uncertainty on the threat capability? I wonder what "account for uncertainty in a proper manner" means...

"Richard and I should be able to go into different rooms with the same set of valid priors and come out with the same (or extremely similar) probability statements." - To me this reads like "If you use the same formula with the same inputs, you get the same results". You won't have the same priors, because there's no real-world data for you to get any decent priors from. Your priors are based on the very limited experience of a single person, not drawn from any significant and statistically sound pool of data. Yes, Richard might have experience worth more than my grannie's, or that of someone who knows little about information security, but Richard versus some other security expert? Will we have a FAIR experience-evaluation method to decide whose experience to trust? I can assure you that I can ask Richard and some other experts and come up with radically different inputs.

What's frustrating is that there are so many good things to measure that usually go unmeasured, and yet people would rather spend a bunch of their time on this stuff.

But now I see, the introduction is simple and flawed and you have to buy the commercial application to get the real deal. Now it all clicks together! :) (Pardon the irony)
WHBaker said…
First off, I want to admit that I have not read all the threads in this discussion. It was forwarded to me by a colleague and I am enjoying it so far. However, what I am noticing is that everyone seems to be "losing the forest for the trees."

Why are we modeling/measuring/estimating/etc? It's to make a decision (or recommend one) pertaining to information security in a business context. I hear a lot of comments to the effect of "if it can't be perfectly accurate, it isn't useful"...and that, folks, is a flawed view of business decision-making. The art of making a decision with little data is difficult, but it is a well-researched field and humans have been doing it for a long time with great success.

I'm not choosing sides in this discussion...I'm just reminding us why we should be worried about all this in the first place: it's all about the DECISION and what we NEED to make a good one.
