NSM vs Encrypted Traffic Revisited

My last post, What Would Galileo Think, was originally the first part of this post, but I decided to let it stand on its own. This post is now a follow-on to NSM vs Encrypted Traffic, Plus Virtualization and Snort Report 16 Posted. I received several questions which I thought deserved a new post. I'm going to answer the first with Galileo in mind.

LonerVamp asked:

So can I infer that you would prefer to MITM encrypted channels where you can, so to inspect that traffic on the wire? :)

On a related note, Ivan Ristic asked:

Richard, how come you are not mentioning passive SSL decryption as an option?

I thought I had answered those questions when I said:

If you loosen your trust boundary, maybe you monitor at the perimeter. If you permit encrypted traffic out of the perimeter, you need to man-in-the-middle the traffic with a SSL accelerator. If you trust the endpoints outside the perimeter, you don't need to.

Let's reconsider that statement with Galileo in mind. Originally I proposed that those who terminate their trust boundary at their perimeter must find a way to penetrate encrypted traffic traversing that boundary. Another way to approach the problem is to perform measurements: try to determine what cost is incurred and what benefit is gained by terminating SSL at the perimeter, inspecting the clear text, and re-encrypting the traffic as it leaves the enterprise. Does that process actually result in identifying and/or limiting intrusions? If yes, use the results to justify the action. If not, abandon the plan, or decide to conduct a second round of measurements if conditions change later. Don't just argue "I need to see through SSL" as a philosophical standpoint.
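To make the mechanics concrete, here is a minimal sketch of the "terminate, inspect, re-encrypt" flow. Everything specific in it is an assumption for illustration: the certificate paths, the listening port, the fixed upstream host, and the inspect() hook are hypothetical, and it handles only a single request/response exchange. A real deployment would use a dedicated SSL proxy or accelerator rather than a hand-rolled forwarder.

```python
import socket
import ssl
import threading

# Hypothetical certificate/key that internal clients have been configured to trust.
CERT, KEY = "proxy-cert.pem", "proxy-key.pem"
UPSTREAM = ("www.example.com", 443)   # hypothetical destination outside the perimeter

def inspect(data: bytes) -> None:
    """Placeholder for the NSM hook: this is where a sensor sees clear text."""
    print(f"inspected {len(data)} bytes of clear text")

def handle(client_sock: socket.socket) -> None:
    # 1. Terminate the client's TLS session at the perimeter with our own certificate.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain(CERT, KEY)
    client_tls = server_ctx.wrap_socket(client_sock, server_side=True)

    request = client_tls.recv(65535)   # clear text is visible here
    inspect(request)

    # 2. Re-encrypt and forward to the real destination, then relay the reply back.
    upstream_ctx = ssl.create_default_context()
    with socket.create_connection(UPSTREAM) as raw:
        with upstream_ctx.wrap_socket(raw, server_hostname=UPSTREAM[0]) as upstream:
            upstream.sendall(request)
            reply = upstream.recv(65535)

    inspect(reply)
    client_tls.sendall(reply)
    client_tls.close()

def main() -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8443))
    listener.listen(5)
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

In practice the sensor sits on the clear-text side of a commercial proxy rather than inside the forwarder itself, but the data flow is the same: decrypt, inspect, re-encrypt.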

Marcin asked:

So what do you say and do when your NSM Sensor/SSL Load Balancer/SSL Proxy gets compromised, exposing your most sensitive data (by nature, because it is being encrypted)?

Am I supposed to rely on my IDS' and my own ability to detect 0day attacks against hardened hosts?


To answer the first question, I would say check out my TaoSecurity Enterprise Trust Pyramid. The same factors that make data from sensors more reliable also make those sensors more resilient. However, no sensor is immune from compromise, and I recommend taking steps to monitor and contain the sensor itself in a manner appropriate for the sensitivity of the traffic it inspects. Keep in mind that a sensor is not an SSL proxy. The SSL proxy might only log URLs; it might not provide clear text to a separate sensor.

Answering the second question could take a whole book. Identifying "0day attacks," what I call "first order detection," is increasingly difficult. Performing second order detection, meaning identifying reinforcement, consolidation, and pillage, is often more plausible, especially using extrusion detection methods. Performing third order detection, meaning discovering indications that your hosts are in someone's botnet or under similar unauthorized control, is another technique. Finally, fourth order detection, or seeing your intellectual property in places where it should not be, is a means to discover intrusions.
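As one hedged illustration of second order detection, the sketch below scans flow records for internal hosts pushing unusually large volumes to external addresses, a rough extrusion-style indicator of consolidation or pillage. The CSV record format, the internal network range, and the byte threshold are all assumptions for illustration, not part of any particular NSM tool.

```python
import csv
import ipaddress

# Assumed flow-record format: src_ip, dst_ip, dst_port, bytes_out
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")   # assumed internal address space
BYTES_THRESHOLD = 100 * 1024 * 1024                  # 100 MB, an arbitrary cutoff

def suspicious_outbound(flow_file: str):
    """Yield flows where an internal host sends a large volume to an external host."""
    with open(flow_file, newline="") as f:
        for row in csv.DictReader(f):
            src = ipaddress.ip_address(row["src_ip"])
            dst = ipaddress.ip_address(row["dst_ip"])
            if src in INTERNAL_NET and dst not in INTERNAL_NET:
                if int(row["bytes_out"]) > BYTES_THRESHOLD:
                    yield row

if __name__ == "__main__":
    for flow in suspicious_outbound("flows.csv"):   # hypothetical flow export
        print(f"large outbound transfer: {flow['src_ip']} -> "
              f"{flow['dst_ip']}:{flow['dst_port']} ({flow['bytes_out']} bytes)")
```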

Vivek Rajan asked:

Daemonlogger is cool, but what do you think about more sophisticated approaches like the Time Machine ? ( http://www.net.t-labs.tu-berlin.de/research/tm/ )

Is there some value in retaining full content of long running (possibly encrypted) sessions?


I don't consider Time Machine "more sophisticated." It's just a question of trade-offs. Where possible I prefer to log everything, because you can never really be sure before an incident what might be important later. Regarding encryption, what if you stop collecting outbound traffic on TCP port 443 because it's supposed to be SSL, only to learn later that an intruder is using some weak obfuscation method, or no encryption at all?
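If you do keep collecting port 443, one hedged way to test the "it's supposed to be SSL" assumption is to look at the first payload bytes of each session and flag anything that does not begin with a TLS handshake record. The sketch below shows the check against made-up sample payloads; in practice you would feed it reassembled flow payloads from your capture system.

```python
def looks_like_tls(payload: bytes) -> bool:
    """Rough check: a TLS session should open with a handshake record,
    i.e. content type 0x16 and an SSL 3.0 / TLS 1.x version field."""
    return (
        len(payload) >= 3
        and payload[0] == 0x16                      # handshake content type
        and payload[1] == 0x03                      # major version for SSL 3.0 / TLS 1.x
        and payload[2] in (0x00, 0x01, 0x02, 0x03, 0x04)
    )

# Hypothetical first-payload bytes observed on port 443 sessions.
sample_flows = {
    "10.1.1.5 -> 203.0.113.9:443": bytes.fromhex("160301020045"),    # TLS ClientHello start
    "10.1.1.7 -> 198.51.100.2:443": b"GET /beacon HTTP/1.1\r\n",     # plain text hiding on 443
}

for flow, first_bytes in sample_flows.items():
    if not looks_like_tls(first_bytes):
        print(f"non-TLS traffic on port 443: {flow}")
```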

To summarize, implement whatever system you select based on the demonstrable improvement it brings to your security posture, not because you think it is helpful. I am particularly critical when it comes to defensive measures. For measures that improve visibility, my objective is to gather additional data with a benefit that outweighs the costs of collection.

Comments

dre said…
ssldump can't handle EDH
dre said…
Wow, I just read, "Tales from the Crypt: fingerprinting attacks on encrypted channels by way of retainting", which apparently discusses detecting zero-day encrypted attacks. Who would have known?

There's also a follow-up paper which discusses implementation, "D2.4: Enhanced NoAH implementation and optimizations" from the European Network of Affined Honeypots.

I found "STILL: Exploit Code Detection via Static Taint and Initialization Analyses" not too difficult of a read compared to other recent publications on this subject.
Richard Bejtlich said…
One comment from me -- I neglected to specifically comment on the difference between terminating SSL and providing private keys to the sensor so it can decrypt SSL passively. The problem with the second approach is that it's really only useful for monitoring inbound traffic to your servers, whereas I'm more concerned with watching outbound traffic to someone else's servers. The passive decryption approach is still an option, but you might consider reviewing logs or using a server-based monitoring approach if you want to see traffic to servers you control.
Anonymous said…
by the way, what is SSL sir?
DavidJBianco said…
BTW, I do think Time Machine is more advanced than the methods Sguil typically uses. I have blogged a bit about Sguil's packet capture here and here. Although I didn't talk about Time Machine, I have evaluated it and found it to be quite nice. The cut-off feature for larger sessions is configurable, and could be deployed in many different ways (e.g., only between trusted systems) or not at all.

The "advanced" part really comes into play with capture and retrieval performance. Time Machine tries very hard to cache as much as possible in RAM, and it's use of indices to speed up retrieval is very nice.

I know it's a little off-topic here, but I just wanted to point out that you can use Time Machine as an advanced packet capture system without sacrificing forensic viability of the data.
Seth Hall said…
David, thanks for writing that clarification; I was just about to do the same. :)

Another advantage to the Time Machine is the ability for external tools to be able to talk to it. In Bro, they're implementing a feature in which Bro might notice some interesting activity and automatically put out a request to your Time Machine installation to grab all of the traffic relating to the host that raised the interesting activity. Essentially, it helps in creating investigation bundles.
Unknown said…
Richard, thanks for the response. I think I just got bogged down in the original post. I gave it a few reads, but just kinda wanted the quick answer. Now that you pulled that quote out, it of course is clear as day and I'm smacking my forehead. :)

Makes sense!
