Because he is one of my Three Wise Men of digital security, I thought I would share some of my favorite excerpts. Some of the material paraphrases his slides to improve readability here.
- Marcus asked how one can make decisions when the likelihood of attack, attack consequences, target value, and countermeasure cost are not well understood. His answer helps explain why so many digital security people quote Sun Tzu:
The art of war is a problem domain in which successful practitioners have to make critical decisions in the face of similar intangibles.
I would add that war, like digital security, features malicious adversaries; many other scenarios misapplied to security (like car analogies) do not.
- Marcus continues this thought by contrasting "The Warrior vs The Quant":
Statistics and demographics (i.e., insurance industry analysis of automobile driver performance by group) [fail in digital security] because there is no enemy perturbing the actuarial data... and "perturbing your likelihoods" is what an enemy does! It's "innovation in attack or defense." (emphasis added)
- Marcus offers two definitions for security which I may quote in the future:
A system is secure when it behaves as expected; no less and certainly no more.
A system is secure when the amount of trust we place in it matches its trustworthiness.
- Marcus debunks the de-perimeterization movement by explaining that a perimeter isn't just a security tool:
A perimeter is a complexity management tool.
In other words, a perimeter is a place where one makes a stand regarding what is and what is not allowed. I've also called that a channel reduction tool.
- Here's an incredible insight regarding the many "advanced" inspection and filtering devices that are supposed to be adding "security" by "understanding" more about the network and making blocking decisions:
At a certain point the complexity [of the firewall/filter] makes you just as likely to be insecure as the original application.
He says you're replacing "known bugs" (in the app) with "unknown bugs" (in the "prevention" device).
- I love this point:
Insiders and counter-intelligence: What to do about insider threat?
- Against professionals: lose
- Against idiots: IDS (Idiot Detection System) works; detect stupidity in action
This is so true. I'd extend the "idiot" paradigm further by adding EDS (Eee-diot Detection System). (Cue "Stimpy, you eee-diot!" if you need pronunciation help here.)
Marcus also offers two approaches to dealing with risk:
- Think of all possible disasters, rank by likelihood, prepare for the Top 10. (9/11 showed this doesn't work.)
- Build nimble response teams and command/control structures for fast and effective reaction to threats as they materialize.
Regarding number one, Marcus obviously thinks that is a waste of time. However, one could argue that if policymakers had paid attention to the intelligence that was available and prepared accordingly, the situation could have been different. That's where threat intelligence on capabilities, intentions, and attack patterns can be helpful for modeling attacks.
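Approach one reduces to a simple expected-loss ranking. Here's a minimal sketch of that idea; every scenario name and number below is invented purely for illustration, not drawn from Marcus's talk:

```python
# Hypothetical scenarios: (name, annual likelihood, impact in dollars).
# All values are made up for illustration.
scenarios = [
    ("phishing",      0.60,  50_000),
    ("ransomware",    0.20, 500_000),
    ("insider theft", 0.05, 750_000),
    ("ddos",          0.30, 100_000),
]

# Rank by expected annual loss (likelihood * impact), highest first.
ranked = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)

# "Prepare for the Top N" -- here N=3 for brevity.
top = ranked[:3]
for name, p, impact in top:
    print(f"{name}: expected loss ${p * impact:,.0f}")
```

Marcus's objection, per the "Warrior vs Quant" quote above, is that an intelligent adversary perturbs exactly the likelihood estimates this ranking depends on, so the ordering can't be trusted.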
Regarding number two, I am so pleased to read this. It's why I'm building a CIRT at my new job. This comment also resonates with something Gadi Evron said during his talk on the "Estonia Cyberwar":
No one is judged anymore by how they prevent incidents. Everyone gets hacked. Instead, organizations are judged by how they detect, respond, and recover.