Thoughts on Rear Guard Security Podcast

I just listened to the first episode of Marcus Ranum's new podcast Rear Guard Security. A previous commenter got it right; it's like listening to an academic lecture. If that gives you a negative impression, I mean it positively: Marcus is a good academic lecturer. These are the sorts of lessons you might buy through The Teaching Company, for example.

Marcus isn't talking about the latest and greatest m4d sk1llz that 31337 d00ds use to 0wn j00. Instead, he's questioning the very fundamentals of digital security and trying to equip the listener with a deep understanding of difficult problems. Most vendors will hate what he says and others will think he's far too pessimistic. I think Marcus is largely right because (although he doesn't say this outright) he believes vulnerability-centric security is doomed to failure. (I noticed Matt Franz thinks I may be right, too.) When you realize that nothing you do will ultimately remove all vulnerabilities, you've got to improve your ability to deter, investigate, apprehend, prosecute, and incarcerate threats. (I'll say a little more on this in a future post.)

One area in which I disagree with Marcus is penetration testing. I think he might accept my position if framed properly, since he is a proponent of "science" to the degree we can aspire to that standard. In my post Follow-Up to Donn Parker Story I wrote:

Rather than spending resources measuring risk, I would prefer to see
measurements like the following:

  1. Time for a pen testing team of [low/high] skill with [external/internal] access to obtain unauthorized [stealthy/unstealthy] control of a specified asset using [public/custom] tools and [zero/complete] target knowledge. Note this measurement contains variables affecting the time to successfully compromise the asset.

  2. Time for a target's intrusion detection team to identify said intruder (pen tester), and escalate incident details to the incident response team.

  3. Time for a target's incident response team to contain and remove said intruder, and reconstitute the asset.

These are the operational sorts of problems that matter in the real world.
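To make the framework concrete, here is a minimal sketch in Python of how the variables in measurement one and the resulting times for all three measurements might be recorded for a single exercise. The names and values are hypothetical illustrations, not anything from the post itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PenTestParameters:
    """The variables in measurement 1, each constrained to two levels."""
    skill: str      # "low" or "high"
    access: str     # "external" or "internal"
    stealth: str    # "stealthy" or "unstealthy"
    tools: str      # "public" or "custom"
    knowledge: str  # "zero" or "complete"

    def __post_init__(self):
        allowed = {
            "skill": {"low", "high"},
            "access": {"external", "internal"},
            "stealth": {"stealthy", "unstealthy"},
            "tools": {"public", "custom"},
            "knowledge": {"zero", "complete"},
        }
        for field, values in allowed.items():
            if getattr(self, field) not in values:
                raise ValueError(f"{field} must be one of {values}")

@dataclass
class ExerciseResult:
    """Times (in minutes) for measurements 1-3 against one asset."""
    params: PenTestParameters
    time_to_compromise: float  # measurement 1: pen team gains control
    time_to_detect: float      # measurement 2: detection team escalates to IR
    time_to_contain: float     # measurement 3: IR team contains, removes,
                               #                and reconstitutes

# Hypothetical example: an easy-to-compromise scenario.
worst_case = ExerciseResult(
    params=PenTestParameters("low", "external", "unstealthy", "public", "zero"),
    time_to_compromise=30,
    time_to_detect=120,
    time_to_contain=240,
)
```

Fixing each variable to a named level is what turns an otherwise unrepeatable exercise into a measurement you can compare across tests.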

Yes, I did slightly modify number one to clarify meaning.

In Answering Penetration Testing Questions I added a few more comments, specifically mentioning a source like SensePost Combat Grading as an example of how to rate the [low/high] variable. That's not necessarily the standard I would use (since I haven't seen it) but it shows professional pen testers do think about such issues. (Maybe I can chat with them at Black Hat?)

Marcus defines pen testing as attempting to determine the quality of an unknown quantity using another unknown quantity and a constantly varying set of conditions. In my #1 metric I try to reduce the number of variables such that the unknown quantities are fewer. I don't think it's ever possible to eliminate those variables, because the unit to be tested (the enterprise, usually) is never in a fixed state.

That reflects the real world. The enterprise attacked on Tuesday may not be like the enterprise on Wednesday. As much as I advocate knowing your network, I recognize that comprehensive, perfect knowledge cannot be obtained, due primarily to complexity but aggravated by many other factors. However, the same factors which complicate our defense can complicate the intruder's offense. Overall I do not see the problem with finding out how long it takes for a pen testing team operating within my chosen parameters to achieve a specified objective.

This is why I think there's room in Marcus' world for my point of view. I believe there is value in the outcome of these tests. In other words, a single test is worth a thousand theories. I cannot count the number of times I've dealt with security people who refuse to believe a given incident has occurred (e.g., their box is rooted, it had no patches, etc.). Once you show them data, there's no room for excuses.

If it takes 30 minutes for a pen testing team of low skill with external access to obtain unauthorized unstealthy control of a specified asset using public tools and zero target knowledge, there's a problem.

If it takes an estimated 6 months for a pen testing team of high skill with internal access to obtain unauthorized stealthy control of a specified asset using custom tools and complete target knowledge, the situation is a lot different! (I say "estimated 6 months" because few if any customers are going to hire a pen team for that long. It is possible for pen teams to survey an architecture and estimate how long it would take for them to research, develop, and execute a custom zero-day.)

There is a reason the DoD and DoE staff robust red teams (i.e., pen testers). The report Defense Science Board Task Force on The Role and Status of DoD Red Teaming Activities is very helpful.

Incidentally, I'd rather not be the guy who debates Marcus on this issue if he wants to argue with a "pen tester." I don't do pen tests for a living. If he just wants an opposing point of view, I can probably provide that.


Anonymous said…

We should definitely hook up in Vegas. Lots of our guys have taosecurity in their RSS readers, and we can chat about stuff in general / combat-grading in particular :>

btw: after a long time of only blogging internally, we have opened up the kimono going forward, so you should be able to catch us on
