NSM vs Encrypted Traffic, Plus Virtualization
A blog reader sent me the following question, and prequalified me to post it anonymously.
For reasons of security and compliance, more and more network connections are becoming encrypted. SSL and SSH traffic are on the rise inside our network. As we pat ourselves on the back for this, the elephant in the room stares at me: how are we going to monitor this traffic? It made me wonder if the future of security monitoring will shift to the host. It appears that the host, provided some centrally managed IDS is installed, would inspect the unencrypted traffic and report back to an HSM (host security monitoring) console. Of course, that requires software (i.e., an agent) on all of our hosts and jeopardizes the trust we have in our NSM data, because "the network doesn't lie".
This is an excellent, common, and difficult question. I believe the answer lies in defining trust boundaries. I've been thinking about this in relation to virtualization. As many of you have probably considered, really nothing about virtualization is new. Once upon a time computers could only run one program at a time for one user. Then programmers added the ability to run multiple programs at one time, fooling each application into thinking that it had individual use of the computer. Soon we had the ability to log multiple users into one computer, fooling each user into thinking he or she had individual use. Now with virtualization, we're convincing applications or even entire operating systems that they have the attention of the computer.
What does this have to do with NSM? This is where trust boundaries are important. On a single-user, multi-application computer, should each app trust the others? On a multi-user, multi-app computer, should the users trust each other? On a multi-OS computer, should each OS trust the others?
If you answer no to these questions, you assume the need for protection mechanisms. Since prevention eventually fails, you also need mechanisms to monitor for exploitation. Where you draw your trust boundaries dictates where you place those mechanisms. Do you monitor system calls? Inter-process communication? Traffic between virtual machines on the same physical box? What about traffic in a cluster of systems, or distributed computing in general?
Coming back to the encryption question, you can treat those channels like channels at any of the earlier levels. If you draw your trust boundary tightly enough, you do need a way to monitor encrypted traffic between internal hosts; in that case your trust boundary has been drawn at the individual host level.
If you loosen your trust boundary, maybe you monitor at the perimeter. If you permit encrypted traffic out of the perimeter, you need to man-in-the-middle the traffic with an SSL accelerator. If you trust the endpoints outside the perimeter, you don't need to. People who don't monitor anything implicitly trust everyone, and as a result get and stay owned.
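To see why the interception point matters, here is a toy sketch in Python. The XOR "cipher" is only a stand-in for real TLS, and the signature and payload are made up, but the point holds for any cipher: a perimeter sensor that sees only ciphertext cannot match signatures, while a sensor placed after the man-in-the-middle decrypts can.

```python
import re

# A toy IDS signature: flag any request touching /etc/passwd.
SIGNATURE = re.compile(rb"/etc/passwd")

def toy_encrypt(plaintext: bytes, key: int = 0x5A) -> bytes:
    # Stand-in for TLS: any cipher destroys the byte patterns an IDS matches on.
    return bytes(b ^ key for b in plaintext)

payload = b"GET /../../etc/passwd HTTP/1.1"

# A sensor seeing only ciphertext finds nothing...
assert SIGNATURE.search(toy_encrypt(payload)) is None
# ...but inside the trust boundary, where traffic is plaintext (or after an
# interception device decrypts it), the same signature fires.
assert SIGNATURE.search(payload) is not None
```

The same logic explains the trade-off in the paragraph above: moving the inspection point inside the boundary (or terminating SSL at the perimeter) is what makes the payload visible at all.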
I do think it is important to instrument whatever you can, and that includes the host. However, I don't think the host should be the final word on assessing its own integrity. An outside check is required, and the network can be a place to do that.
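As a sketch of that outside check, consider comparing what a host reports about its own connections against what an NSM sensor saw on the wire. The data here is hypothetical; in practice the host set would be parsed from the host's own socket listing and the network set from session records for that host.

```python
# Connections as (src_ip, src_port, dst_ip, dst_port) tuples.
host_reported = {
    ("10.0.0.5", 49200, "10.0.0.9", 443),
}
network_observed = {
    ("10.0.0.5", 49200, "10.0.0.9", 443),
    ("10.0.0.5", 31337, "203.0.113.7", 6667),  # the host never admits to this one
}

# Anything the wire saw that the host denies is evidence the host's
# own view of its integrity can no longer be trusted.
hidden = network_observed - host_reported
for conn in sorted(hidden):
    print("host is hiding:", conn)
```

A compromised host can lie to its own tools, but it cannot easily erase the packets it already sent, which is exactly why the network makes a useful outside check.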
By the way, if you send a question by email, this is the best way to get an answer from me. I do not answer questions of a "consulting" nature privately -- I either post the answer here or not at all. Thanks for the good question, JS.
Comments
Am I supposed to rely on my IDS' and my own ability to detect 0day attacks against hardened hosts?
I'm playing devil's advocate here, but I think we can agree that there definitely needs to be more research and work done to address this problem.