Brief Response to Marty's Post

Marty Roesch was kind enough to respond to my recent posts on NSM. We shared a few thoughts in IRC just now, but I thought I would post a few brief ideas here.

My primary concern is this: just because you can't collect full content, session, statistical, and alert data everywhere doesn't mean you should avoid collecting it anywhere. I may not have sensors on the sorts of networks Marty describes (high-bandwidth core networks), but I have had (and have) sensors elsewhere that did (and do) support storing decent amounts of NSM data on commodity hardware using open source software. I bet you do too.
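For the curious, here is a minimal sketch of what "decent amounts of NSM data on commodity hardware" can look like for full content: a tcpdump ring buffer that fills a fixed disk budget and then overwrites the oldest capture files. The interface name, file sizes, and output path are my assumptions for illustration, not a recipe.

```python
# A minimal sketch: full content collection on commodity hardware using
# tcpdump's ring buffer. Keeps roughly 100 GB on disk, then overwrites
# the oldest files. Interface, sizes, and path are assumptions.
import subprocess

subprocess.run([
    "tcpdump",
    "-i", "eth1",                    # assumed monitoring interface
    "-s", "0",                       # capture full packets, not just headers
    "-C", "1000",                    # rotate after ~1000 million bytes (~1 GB) per file
    "-W", "100",                     # keep 100 files, then overwrite the oldest
    "-w", "/nsm/full/capture.pcap",  # assumed storage path
])
```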

I'm not advocating you store full content on the link to your storage area network. I don't expect Sony to store full content of 8 Gbps of traffic entering their gaming servers. I don't advocate storing full content in the core. Shoot, I probably wouldn't try storing session data in the core. Rather, you should develop attack models for the sorts of incidents that worry you and develop monitoring strategies that best meet those needs given your resource constraints.
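To see why the core is out of reach, the arithmetic is enough. A quick back-of-envelope calculation, assuming that 8 Gbps link runs at a sustained rate, shows the storage problem:

```python
# Back-of-envelope math for why full content capture fails in the core.
# Assumes a sustained 8 Gbps link, the figure from the Sony example above.
link_gbps = 8
bytes_per_second = link_gbps * 1e9 / 8     # 1 GB/s
bytes_per_day = bytes_per_second * 86_400  # seconds in a day
print(f"{bytes_per_day / 1e12:.1f} TB of full content per day")  # ~86.4 TB/day
```

No commodity sensor is going to keep pace with roughly 86 TB of new data per day, which is exactly why attack models and resource-aware monitoring strategies matter.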

For example, almost everyone can afford to monitor almost all forms of NSM data at the point where their users exit the intranet and join the Internet. I seldom see these sorts of access links carrying the loads typically thought to cause problems for commodity hardware and software. (ISPs, this does not include you!) This is the place where you can perform extrusion detection by watching for suspicious outbound connections. If you follow defensible network architecture principles, you can augment your monitoring by directing all outbound HTTP (and other supported protocols) through a proxy. You can inspect those proxy logs instead of reviewing NSM data, if you have access.
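As an illustration, here is a rough sketch of what watching for suspicious outbound connections might look like against session records collected at the egress point. The record format, the approved port list, and the proxy address are all assumptions made for the example; a real deployment would query Sguil session data or proxy logs directly.

```python
# A minimal extrusion detection sketch over session records at the egress
# point. Assumed input: a CSV of sessions with src_ip, dst_ip, dst_port
# columns. The policy values below are hypothetical.
import csv

APPROVED_PORTS = {25, 53, 80, 443}  # assumption: policy allows only these outbound
PROXY_IP = "10.0.0.5"               # assumption: all web traffic should leave via this proxy

def suspicious_outbound(path):
    """Yield (record, reason) for sessions that violate outbound policy."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            port = int(row["dst_port"])
            if port in (80, 443) and row["src_ip"] != PROXY_IP:
                yield row, "web traffic bypassing the proxy"
            elif port not in APPROVED_PORTS:
                yield row, "unapproved destination port"

for rec, reason in suspicious_outbound("sessions.csv"):  # hypothetical export
    print(rec["src_ip"], "->", rec["dst_ip"], ":", rec["dst_port"], "--", reason)
```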

Marty also emphasizes the problems caused by centralizing NSM data. I do not think centralization is a key aspect, or necessarily a required aspect, of NSM. One of my clients has three sensors. None of them report to a central point. All of them run their own sensor, server, and database components.

The existing Sguil architecture centrally stores alert and session data. Full content data remains on the sensor and is periodically overwritten. I am personally in favor of giving operators the option of storing session data in a database local to the sensor. That would significantly reduce the problems of centralization. I almost never "tune" or "filter" statistical or full content data. I seldom "tune" or "filter" session data, but I always tune alert data. By keeping the session data on the sensor, you can collect records of everything the sensor sees without wasting bandwidth pushing all that information to a central store.
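To make the idea concrete, here is a rough sketch of session data living in a database local to the sensor, queried on demand instead of streamed to the central server. The schema is my assumption, loosely modeled on SANCP-style session fields; it is not Sguil's actual implementation.

```python
# A minimal sketch of sensor-local session storage. Records stay on the
# sensor until an analyst asks for them. Schema and path are assumptions.
import sqlite3

conn = sqlite3.connect("sessions.db")  # assumed local file on the sensor
conn.execute("""
    CREATE TABLE IF NOT EXISTS session (
        start_time TEXT, src_ip TEXT, src_port INTEGER,
        dst_ip TEXT, dst_port INTEGER,
        src_bytes INTEGER, dst_bytes INTEGER
    )
""")

def record_session(row):
    """Insert one session record; nothing leaves the sensor."""
    conn.execute("INSERT INTO session VALUES (?,?,?,?,?,?,?)", row)
    conn.commit()

def sessions_for_host(ip):
    """Pull only the records relevant to an investigation."""
    return conn.execute(
        "SELECT * FROM session WHERE src_ip = ? OR dst_ip = ?", (ip, ip)
    ).fetchall()
```

The point is that the records never cross the wire until an analyst investigating an alert asks for them.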

Marty also said this:

Then we've got training. I know the binary language of moisture vaporators, Rich knows the binary language of moisture vaporators, lots of Sguil users know it too. The majority of people who deploy these technologies do not. Giving them a complete session log of an FTP transfer is within their conceptual grasp; giving them a fully decoded DCERPC session is probably not. Who is going to make use of this data effectively? My personal feeling is that more of the analysis needs to be automated, but that's another topic.

Excellent Star Wars comment. I don't like the alternative, though. As I described here, I'm consulting for a client stuck with a security system they don't understand and for which they don't have the data required to acquire real knowledge of their network. I don't understand how providing less information is supposed to help this situation. As I wrote in Hawke vs the Machine, expertise grows from having the right forms of data available. In other words, it's the data that makes the expert. I don't have any special insights into alerts from an IDS or IPS. I can make sense of them only through investigation, and that requires data to investigate.

Recording everything, everywhere will never scale and isn't feasible. However, the revolution will be monitored, which will help us understand our networks better and hopefully detect and eject more intruders.

Comments

Anonymous said…
In the end it all comes down to resources. You need the resources to set up your collection architecture and maintain it. Then you need the resources to review the data you have collected. Unfortunately, in some organizations these are not top priorities, so they are staffed accordingly. I've seen many people implement security technologies only to use them when something bad happens, not to understand their network and proactively look for anomalies.
