Friday, January 30, 2009

Virtualized Network Security Monitoring Platforms

Yesterday a blog reader asked:

Looking back at previous blogs, notably

http://taosecurity.blogspot.com/2005/12/network-monitoring-platforms-on-vmware.html

I see that you have, in your classes, used VMs to run your network monitoring tools. Have you carried this idea into a production environment, or do you still feel that running tools in this configuration, whether on a Linux host or not, would be too much of a compromise?

The scenario I work under means that I cannot have my sensors connected to the Internet, which makes keeping them up to date difficult. I was looking at creating a generic VM that I could keep live and up to date on an Internet-facing terminal, then copy to my production environment as and when I wish to deploy new versions of tools or updated signatures.

Grateful for your thoughts


The reader is correct; whenever I deploy NSM platforms as VMs, it's only for demo or class purposes. I do not use NSM platforms as VMs in production, where my "NSM VM" might be deployed alongside other production VMs. I make the following arguments.

  1. Most NSM platforms I deploy require at least 3 NICs, and possibly 5. My VMs actively use 1 NIC for management via SSH, 2 NICs for connection to a traditional network tap, and potentially 2 more NICs for tapping a backup or highly available link paired with the first tap. Most VM server platforms are designed to share a small number of NICs among many virtual machines for management and serving content. The NSM in VM concept stretches or breaks that model.

  2. NICs on NSM platforms do not transmit or receive data periodically; they are effectively always receiving. The monitoring NICs on an NSM platform sniff network traffic, and on anything but the most trivial network they are constantly busy. This also stretches, if not breaks, the traditional VM model.

  3. Hard drives on NSM platforms are constantly writing, incurring high storage I/O costs. NSM platforms are constantly writing full content pcap data to disk, in addition to session records, IDS alerts, and other data. The full content collection requirement alone strains the idea of having an NSM VM writing data while other VMs on the same physical hardware compete for resources.

  4. Some NSM applications are RAM- and CPU-intensive. Snort in 2009 is not the same as Snort in 1999. If you've looked at the resources consumed by Snort 2.x, you'll see a large footprint. NSM platforms also run session data collection tools (like SANCP), other alert data tools (maybe Bro), and other tools, so a sensor is already sharing its resources among many competing consumers.

  5. If you consider why virtualization is attractive, you'll see virtualizing an NSM appliance doesn't fit the model. Virtualization is supposed to be helpful for making more efficient use of underutilized hardware. The classic case involves a rack of servers, each using a fraction of its resources. Why not virtualize those servers and reduce your overall computing footprint? That model makes sense. It does not make sense to take a platform which is already monopolizing its available resources and shoehorn it into a virtual server.
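To make the full content storage point (point 3) concrete, here is a back-of-the-envelope sketch in Python; the link speeds and utilization figures are hypothetical examples, not measurements:

```python
def daily_pcap_gb(link_mbps, avg_utilization):
    """Estimate gigabytes of full content (pcap) data written per day
    for a monitored link, ignoring capture-file overhead."""
    bytes_per_sec = link_mbps * 1_000_000 / 8 * avg_utilization
    return bytes_per_sec * 86_400 / 1_000_000_000  # sec/day, bytes -> GB

# A gigabit link at 30% average utilization:
print(round(daily_pcap_gb(1000, 0.30)))      # prints 3240 (GB/day)

# A T1 (1.544 Mbps) at 50% average utilization:
print(round(daily_pcap_gb(1.544, 0.50), 1))  # prints 8.3 (GB/day)
```

The three-orders-of-magnitude gap between those two numbers is why the answer to "can I virtualize my sensor?" depends so heavily on the link being monitored.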


There are obvious benefits to virtualization in general, such as easy creation/destruction of VMs, the ability to snapshot/introduce a change/roll back, and so on. At this point, those advantages do not compensate for the disadvantages I outlined.


Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

12 comments:

Christofer Hoff said...
This comment has been removed by the author.
Christofer Hoff said...

For some reason the beginning of my comment was lopped off:

Here's what I said:

These are all valid and interesting points -- for now.

The reality is that there is a sea change coming in some of the underlying networking and security capabilities of (at least) VMware and how that relates to the limitations of currently deploying sensors.

I know I'm really drifting from the "virtualizing sensors" discussion point and talking more about "sensors virtualized," but I think they are closely related in the long term.

Case in point: the Cisco Nexus 1000v, VN-Link, and distributed virtual switching, when combined with the VMsafe APIs, mean that the issues associated with the traditional plumbing-in of monitoring solutions change dramatically.

This evolves even further when you consider the integration of the Nexus 5000 and the initiator.

The point here is that the capabilities to set filters/triggers across a virtual switching fabric (both in software and hardware) will ultimately simplify and expand the monitoring footprint. It will also potentially lead to folks using more virtual appliances in unique ways.

The definition of a sensor at the atomic level will change as will the notion of how we scale them to provide for the capacity that will be needed.

This doesn't obviate physical, dedicated sensors entirely, but it solves many of the problems associated with trying to integrate them into consolidating and virtualized infrastructure.

These are certainly VMware-centric solutions, but that's exactly why I keep jumping up and down whining about the lack of comparable technologies in the other virtualization platforms. There are some moves afoot with Xen, but we'll see how that moves forward.

We can't currently get the same visibility in virtual environments that we get in physical ones, due to the performance, scale, and integration issues you and I talk about, but it's coming.

My $0.02

/Hoff

Anonymous said...

Two years ago I wrote in an internal statement something like "...virtualization introduces a layer of complexity we should not underestimate... we need knowledge, experience, and tools to make periodic integrity checks... we do not have the low-level skill of Joanna & co. nor the time to develop it... so dear boss, while we evaluate the savings in hardware and energy, let's not forget the follow-up costs..."
Over two years have passed since then and things have evolved... but I still miss the tools for integrity checks and can't afford the time for training or experienced personnel.
I am therefore extremely careful with virtualization... but maybe I take the lessons learned in 'security engineering' too seriously...? What do you think?

Chris Buechler said...

You're looking at things from the huge corporation perspective, Richard. And from a hosted hypervisor perspective as well; I believe you last posted that you hadn't used ESX or similar competing offerings. Addressing your points individually, mostly based on my experience with VMware ESX:

1. You can put a whole lot more than 3-5 physical NICs into a single ESX box, and you don't have to share them. A typical 2U server with two PCI-X/PCIe slots can easily have 10 NICs: two 4-port cards plus the usual two onboard ports. These setups work very well in ESX.

2. True, but still works fine. I do it routinely.

3. True, but how much? This will differ widely from one scenario to another.

4. Yes, but again how much? Also varies greatly.

5. Not universally true at all.

Points 3-5 are all dependent on the amount of traffic you're monitoring. If you're at a Fortune 100 company monitoring gig taps with high load running over them, then yes, virtualization likely doesn't make sense. If you're a small to mid sized company monitoring a T1 or maybe a 10 Mbps Internet or WAN pipe, then it can make sense in some circumstances. You aren't going to be using a considerable amount of resources monitoring that little bit of traffic, and it nullifies all these points.

Whether or not you trust virtualization in general, as "Anonymous" posted about, is another discussion entirely. My hands on experience is such that for most things, I trust it.

If you're monitoring a lot of traffic (50-100 Mbps or more, though that number will vary between scenarios), I agree entirely with these points. If you aren't, don't write off virtualization so quickly. It still may not make sense depending on the specifics of your environment and your virtualization deployment, but in some instances your options will be either add it to the existing virtualization platform or don't do it at all. NSM can and does work in virtualized environments.

Chris Buechler said...

Oh, and one additional thought I left out - my previous comments were strictly related to monitoring physical links with a virtual machine. What about monitoring traffic between VMs that never leaves the host and touches the physical wire?

As Christofer Hoff noted, things are changing considerably related to this. For now, NSM inside a VM may be the only way, or the best way, to monitor traffic between VMs depending on the specifics of your deployment.

Andre Gironda said...

I'm going to agree with The Hoff. VNET and hypervisor introspection (e.g. VMware's VMsafe API or XenAccess.sf.net) completely change this layer. This is happening, oh, right about now sometime. Sourcefire, Reflex Systems, and Stonesoft are going to be key monitoring/visibility players because they are in the VMware TAP, the Technology Alliance Partner program.

@ Buechler: Isn't the recommended ESX configuration 4 or 6 NICs? Personally, I'd go with 4 (VMkernel, no NAS or iSCSI) or 6 (VMkernel with NAS and/or iSCSI) as it makes the architecture much easier. I've got some great reference architectures for everyone to check out at some point. Hoff has seen them.

I think I've said this numerous times to Bejtlich and Hoff: rpcapd compiles under Unix and Windows (I think I learned this from TToNSM). You can wrap it in stunnel if you'd like. This is going to work with vSwitch and VNET technology, although it's not going to take advantage of the hypervisor introspection layer. I would certainly rather run this than the promiscuous mode available in the VMware VIC vSwitch Security Properties tab.
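Andre's rpcapd-over-stunnel idea can be sketched in two small stunnel configuration fragments. This is a hedged example, not a tested deployment: the hostname `sensor.example.com`, the tunnel port 2003, and the certificate path are hypothetical, and it assumes rpcapd is listening on its usual port 2002 on the sensor.

```ini
; --- analyst workstation (stunnel client) ---
; Local rpcap clients connect to 127.0.0.1:2002; stunnel
; forwards them over TLS to the sensor.
client = yes
[rpcap]
accept  = 127.0.0.1:2002
connect = sensor.example.com:2003

; --- sensor (stunnel server, separate config file) ---
; Terminates TLS on port 2003 and hands the cleartext
; connection to the local rpcapd instance.
;cert = /etc/stunnel/sensor.pem
;[rpcap]
;accept  = 2003
;connect = 127.0.0.1:2002
```

The design point is simply that rpcapd itself speaks cleartext, so the TLS wrapper supplies the confidentiality and endpoint authentication that remote capture otherwise lacks.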

Chris Buechler said...

There isn't really a recommended number of NICs for ESX. It varies quite a bit depending on your needs and specifics of your deployment. I like one for management only, at least one for VMs, and one for storage if using iSCSI. Or multiple bonded NICs for each of those purposes. Some people run all of the above on one NIC, though I would certainly consider that inadvisable.
