Marcus Ranum Highlights from USENIX Class
Because I was teaching at USENIX Security this month, I didn't get to attend Marcus Ranum's tutorial, "They Really Are Out to Get You: How to Think About Computer Security." I did manage to read a copy of Marcus' slides.
Because he is one of my Three Wise Men of digital security, I thought I would share some of my favorite excerpts. Some of the material paraphrases his slides to improve readability here.
- Marcus asked how one can make decisions when the likelihood of attack, attack consequences, target value, and countermeasure cost are not well understood. His answer helps explain why so many digital security people quote Sun Tzu:
The art of war is a problem domain in which successful practitioners have to make critical decisions in the face of similar intangibles.
I would add that intelligent, malicious adversaries are present in war but absent from certain other scenarios misapplied to security, like car analogies.
- Marcus continues this thought by contrasting "The Warrior vs The Quant":
Statistics and demographics (i.e., insurance industry analysis of automobile driver performance by group) [fail in digital security] because there is no enemy perturbing the actuarial data... and "perturbing your likelihoods" is what an enemy does! It's "innovation in attack or defense." (emphasis added)
A toy simulation after this list sketches what such perturbation does to an actuarial estimate.
- Marcus offers two definitions of security which I may quote in the future:
A system is secure when it behaves as expected; no less and certainly no more.
A system is secure when the amount of trust we place in it matches its trustworthiness.
- Marcus debunks the de-perimeterization movement by explaining that a perimeter isn't just a security tool:
A perimeter is a complexity management tool.
In other words, a perimeter is a place where one makes a stand regarding what is and what is not allowed. I've also called that a channel reduction tool.
- Here's an incredible insight regarding the many "advanced" inspection and filtering devices that are supposed to be adding "security" by "understanding" more about the network and making blocking decisions:
At a certain point the complexity [of the firewall/filter] makes you just as likely to be insecure as the original application.
He says you're replacing "known bugs" (in the app) with "unknown bugs" (in the "prevention" device).
- I love this point:
Insiders and counter-intelligence: What to do about insider threat?
- Against professionals: lose
- Against idiots: IDS (Idiot Detection System) works; detect stupidity in action
This is so true. I'd extend the "idiot" paradigm further by adding EDS (Eee-diot Detection System). (Cue "Stimpy, you eee-diot!" if you need pronunciation help here.)
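Returning to the "perturbing your likelihoods" point above, here is the promised toy simulation. It is only a minimal sketch with invented numbers: a defender averages historical attack volumes; an actuarial process keeps matching that average, while an adapting adversary does not.

    import random

    random.seed(1)

    # History of monthly attack volumes while the old defenses were in place.
    history = [random.gauss(100, 10) for _ in range(12)]
    estimate = sum(history) / len(history)

    # An actuarial process (no adversary) keeps producing draws from the
    # same distribution, so the historical average stays predictive.
    actuarial_next = random.gauss(100, 10)

    # An adversary observes the defenses tuned to that average and
    # innovates: the next "draw" comes from a different distribution.
    adversarial_next = random.gauss(300, 10)

    print(f"historical estimate: {estimate:6.1f}")
    print(f"actuarial next:      {actuarial_next:6.1f}")   # close to estimate
    print(f"adversarial next:    {adversarial_next:6.1f}")  # estimate is useless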
Marcus also offers two approaches to dealing with risk:
- Think of all possible disasters, rank by likelihood, prepare for Top 10. (9/11 showed this doesn't work.)
- Build nimble response teams and command/control structures for fast and effective reaction to threats as they materialize.
Regarding number one, Marcus obviously thinks that approach is a waste of time. However, one could argue that if policymakers had paid attention to the intelligence that was available, and prepared accordingly, the situation could have been different. That's where threat intelligence on capabilities, intentions, and attack patterns can be helpful for modeling attacks.
Regarding number two, I am so pleased to read this. It's why I'm building a CIRT at my new job. This comment also resonates with something Gadi Evron said during his talk on the "Estonia Cyberwar":
No one is judged anymore by how they prevent incidents. Everyone gets hacked. Instead, organizations are judged by how they detect, respond, and recover.

Finally, Marcus slams the idea that one can use an equation to quantify risk. He calls "Risk = Threat X Vulnerability X Asset Value" one wild guess times another wild guess times another wild guess. I agree with this, but I would say the concept of separating out those variables helps one understand how risk changes as one variable changes with the others held constant.
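As a toy illustration of that last point, here is a minimal sketch with invented 1-to-5 scores (exactly the kind of wild guesses Marcus derides), showing how the guessed risk moves when one factor changes with the others held constant:

    # Risk = Threat x Vulnerability x Asset Value, with made-up ordinal scores.
    def risk(threat, vulnerability, asset_value):
        return threat * vulnerability * asset_value

    baseline = risk(threat=4, vulnerability=3, asset_value=5)  # 60
    patched = risk(threat=4, vulnerability=1, asset_value=5)   # 20

    # Holding threat and asset value constant, cutting the vulnerability
    # score to a third cuts the (guessed) risk to a third.
    print(baseline, patched)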
Comments
"A system is secure when it behaves as expected; no less and certainly no more."
This is a definition of reliability, not security. Imagine a system that has been compromised with virtually no footprints or system resource impact.
"A system is secure when the amount of trust we place in it matches its trustworthiness."
I'm not sure this definition is truly one of security. Trust = secure? I don't trust this thing at all. Since it behaves that way, then it's secure? This, to me, sounds like another definition of reliability (a superset of security).
Sweet!
First, if you're going to pick on risk, you need to use something better than that equation. Citing it as an example of real risk expression in this day and age is, at best, lazy.
Second, if you're making wild guesses, then yes, you should be focusing on "good practices", security by compliance and governance, or whatever else. Good luck with that, though, because essentially you'll still be making "wild guess" risk decisions; you just won't be acknowledging them as such.
"Marcus also offers two approaches to dealing with risk:
1.) Think of all possible disasters, rank by likelihood, prepare for Top 10. (9/11 showed this doesn't work.)
2.) Build nimble response teams and command/control structures for fast and effective reaction to threats as they materialize."
The second, of course, is simply a crude way of doing the first - and therefore just as likely to not work by that same logic.
BOTTOM LINE
These arguments are simply a rewording of what Donn Parker has already suggested. They fail the reality test for the same reason Mr. Parker's do:
Risk determination is an inescapable act on the part of the practitioner. There is no question that you will be making risk decisions (probability of event x probable impact of event) - the only question is with what rigor you will be making those decisions.
But simply picking bad models, claiming they don't work, and therefore telling people to go with their own ad-hoc "gut" model isn't wisdom or even advice; it's just rejecting a scientific approach in favor of a shamanistic one.
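To make "probability of event x probable impact of event" concrete, here is a minimal sketch; the scenarios and numbers are invented, and the rigor lies in writing the guesses down where they can be challenged:

    # Expected annual loss = probability of event x probable impact of event.
    # Every figure below is an explicit, challengeable guess.
    scenarios = {
        "stolen laptop":     (0.30, 20_000),    # (annual probability, impact $)
        "web app breach":    (0.05, 500_000),
        "insider data leak": (0.02, 2_000_000),
    }

    # Rank scenarios by expected loss to prioritize where to spend effort.
    ranked = sorted(scenarios.items(),
                    key=lambda kv: kv[1][0] * kv[1][1],
                    reverse=True)
    for name, (p, impact) in ranked:
        print(f"{name:18s} expected annual loss: ${p * impact:>9,.0f}")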
Then you and I are going to have to agree to disagree. I don't see the capabilities or frequencies of aggregate threat communities acting in a significantly different manner than any other general population.
As Jaynes comments, we're not sure why, but the world works in a Gaussian way - so why not take his advice and go that route, unless there's a really good reason not to (like using a Jeffreys prior to introduce noise)?
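Since the thread keeps returning to priors, here is a minimal sketch of the kind of updating being advocated: a conjugate Beta-Binomial model starting from the Jeffreys prior Beta(1/2, 1/2), with invented incident counts.

    # Conjugate Beta-Binomial update starting from the Jeffreys prior.
    a, b = 0.5, 0.5                # Jeffreys prior for a Bernoulli parameter
    months, breach_months = 24, 3  # invented data: 3 breach-months out of 24

    # Posterior parameters after observing the data.
    a += breach_months
    b += months - breach_months

    posterior_mean = a / (a + b)   # 3.5 / 25 = 0.14
    print(f"posterior mean monthly breach probability: {posterior_mean:.2f}")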
You said:
"I don't see the capabilities or frequencies of aggregate threat communities acting in a significantly different manner than any other general population."
Even if this were true (which I doubt), the vulnerability landscape for digital security changes on a monthly, if not weekly, basis. So does the asset side. That is unlike anything else I can imagine.
RE: Vuln Landscape: I would offer that our ability to resist the force applied by a threat agent is one of the easiest priors for us to gather.
RE: Asset Management: what changes to your asset pool are so significant that you cannot keep up with them?
In both cases, I would offer that if you believe you have non-informative priors for these factors, there are dozens of vendors out there that would help you build timely priors for use in a probabilistic model.
Heck, I'll even offer up the real weakness in probabilistic risk modeling for you to discuss possible solutions for: when you think about all the factors that contribute to risk, the determination of frequencies for the most probable threat community is a much larger issue to overcome.
Either way, this is all very off topic from the problems I have with what I think Marcus was insinuating here (I have the utmost respect for him, and I write these comments in hopes that he will invest time in probability theory instead of becoming a rather pessimistic Donn Parker clone).
For example, you were recently charged with running an incident response team, correct?
Now let's say you had to hire your team from scratch - how would you determine who to hire, and what skills and resources were needed in applicants? If you were using anything other than risk (a derived value consisting of probable frequency of loss event and probable magnitude of loss), then how could you expect to be at all rational in your hiring practices?
You couldn't, of course! You have to prioritize the skills and resources you seek based on the most probable sources of risk - the same thing Marcus argues against doing directly above his advice to build a nimble response team.
The argument Marcus may or may not realize he's making here - and I'm probably guilty of carrying his words out to a logical extreme - is that you could somehow operate without making risk decisions.
In fact, the irony of Mr. Parker's statement that we should just follow "good practices" is that what you're really doing is:
1.) Simply following the codification of someone else's risk tolerance/expectancy
2.) Trying to transfer the risk of being wrong to some more static (paternalistic?) prescription for risk management.
Bottom line, it's inescapable. So we are presented with two choices:
a.) take a scientific approach. Build models. Test them using the scientific method.
b.) take a faith-based approach.
I'd rather choose A, be more accurate and not worry so much about precision, than B - be as precise as I can be (and obtain a false sense of security) without caring if I'm accurate.
One last thing, and I'm not trying to be snarky here - if "adversaries distort any kind of actuarial approach", then how can insurance companies stay in business? Be they criminals or force majeure, the whole purpose of insurance is to apply an actuarial approach to adversaries, isn't it?
Regarding risk, the first approach of addressing the top 10, and your point about listening to the intelligence... I just don't think that approach can ever win, especially if you put lots of rigor behind it. There will always be that #11 event that happens, and someone will wave intelligence around saying they predicted it, just like they predicted a Cinderella in the latest Final Four; a sort of retrospective/hindsight winner every time. Unless intelligence can be quantified and ranked properly, that approach, while good for what gets ranked top 10 (or top x), will always be second-guessed and second-guessable. I hate that sort of position unless it's taken with risk management as a whole, which can be far more tolerant of outliers like that odd #11.

For instance, I'm no Bush fan, but I won't ever blame him for missing 9/11. We (the public) would have thought him a paranoid nut for spending time and money preventing the attack... and if he were successful, we'd never have proof he prevented anything.
I also appreciate the second option, and find it to be the goal even in a smaller company where the response team is, well, one. Being a nimble responder or team is only part of it; having an organization that can support and allow nimble response is also important and should be part of the approach. I like what Alex said about how #2 is a crude approximation of #1, but I think it helps a lot to put less rigor behind the ranking and more behind being prepared for most anything, while staying a little more prepared and exercised for those top 10.