NSM vs The Cloud

A blog reader posted the following comment to my post Network Security Monitoring Lives:

How do you use NSM to monitor the growing population of remote, intermittently connected mobile computing devices? What happens when those same computers access corporate resources hosted by a 3rd party, such as corporate SaaS applications or storage in the cloud?

This is a great question. The good news is we are already facing this problem today. The answer to the question can be found in a few old principles I will describe below.

  • Something is better than nothing. I've written about this elsewhere: computer professionals tend to think in binary terms, i.e., all or nothing. A large number of people I encounter think "if I can't get it all, I don't want anything." That thinking flies in the face of reality. There are no absolutes in digital security, or analog security for that matter. I already own multiple assets that do not strictly reside on any single network that I control. In my office I see my laptop and Blackberry as two examples.

    Each could indeed have severe problems that started when they were connected to some foreign network, like a hotel or elsewhere. However, when they obtain Internet access in my office, I can watch them. Sure, a really clever intruder could program his malware to be dormant on my systems when I am connected to "home." How often will that be the case? It depends on my adversary and his deployment model. (Consider malware that never executes on VMs. Hello, malware-proof hosts that only operate on VMs!)

    The point is that my devices spend enough time on a sufficiently monitored network for me to have some sense that I could observe indicators of problems. Of course I may not know what those indicators could be a priori; cue retrospective security analysis.

  • What is the purpose of monitoring? Don't just monitor for the sake of monitoring. What is the goal? If you are trying to identify suspicious or malicious activity against high-priority servers, does it make sense to try to watch clients? Perhaps you would be better off monitoring closer to the servers? This is where adversary simulation plays a role. Devise scenarios that emulate activity you expect an opponent to perform. Execute the mission, then see if you caught the red team. If you did not, or if your coverage was less than what you think you need, devise a new resistance and detection strategy.

  • Build visibility in. When you are planning how to use cloud services, build visibility into the requirements. This will not make you popular with the server and network teams that want to migrate to VMs in the sky or MPLS circuits that evade your NSM platforms. However, if you have an enterprise visibility architect, you can build requirements for the sort of data you need from your third parties and cloud providers. This can be a real differentiator for those vendors. Visibility is really a prerequisite for "security," anyway. If you can't tell what's happening to your data in the cloud via visibility, how are you supposed to validate that it is "secure"?
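The retrospective security analysis mentioned above can be sketched in a few lines: retain historical connection records from your monitored network, and when you learn a new indicator of compromise, search the archive for earlier matches. This is only an illustrative sketch; the record fields and indicator list here are hypothetical, not any particular NSM tool's log format.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical minimal connection record, as an NSM sensor might log it.
@dataclass
class ConnRecord:
    ts: datetime       # when the connection was observed
    src: str           # source IP address
    dst: str           # destination IP address
    dst_port: int      # destination port

def retrospective_matches(archive, bad_ips):
    """Return historical connections that touch newly learned bad IPs."""
    bad = set(bad_ips)
    return [r for r in archive if r.src in bad or r.dst in bad]

# An indicator learned today can flag traffic recorded weeks earlier,
# even though nothing looked suspicious at capture time.
archive = [
    ConnRecord(datetime(2009, 2, 1, 9, 30), "10.0.0.5", "203.0.113.7", 443),
    ConnRecord(datetime(2009, 2, 2, 14, 5), "10.0.0.8", "198.51.100.22", 80),
]
hits = retrospective_matches(archive, ["203.0.113.7"])
```

The point is the workflow, not the code: you cannot know the indicators a priori, so the value lies in keeping the raw records long enough to ask new questions of old data.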


I will say that I am worried about attack and command and control channels that might reside within encrypted, "expected" mechanisms, like updates from the Blackberry server and the like. I deal with that issue by not handling the most sensitive data on my Blackberry. There's nothing novel about that.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. Early Las Vegas registration ends 1 May.
