Looking back at previous blogs, notably
I see that you have, in your classes, used VMs to run your network monitoring tools. Have you progressed this idea into a production environment, or do you still feel that running tools in this configuration, be they on a Linux host or not, would be too much of a compromise?
The scenario I work under means that I cannot have my sensors connected to the Internet, which makes keeping them up to date difficult. I was looking at creating a generic VM that I could keep live and up to date on an Internet-facing terminal, and then copy to my production environment as and when I wish to deploy new versions of tools or updated signatures.
Grateful for your thoughts.
The reader is correct; whenever I deploy NSM platforms as VMs, it's only for demo or class purposes. I do not use NSM platforms as VMs in production, where my "NSM VM" might be deployed alongside other production VMs. I make the following arguments.
- Most NSM platforms I deploy require at least 3 NICs, and possibly 5. My sensors actively use 1 NIC for management via SSH, 2 NICs for connection to a traditional network tap, and potentially 2 more NICs for tapping a backup or highly available link paired with the first tap. Most VM server platforms are designed to share a small number of NICs among many virtual machines for management and serving content. The NSM-in-VM concept stretches or breaks that model.
- NICs on NSM platforms do not transmit or receive data only periodically. The monitoring NICs on an NSM platform sniff network traffic, and on anything but the most trivial network they are constantly receiving. This also stretches, if not breaks, the traditional VM model.
- Hard drives on NSM platforms are constantly writing, incurring high storage I/O costs. NSM platforms continuously write full content pcap data to disk, in addition to session records, IDS alerts, and other data. The full content collection requirement alone strains the idea of an NSM VM writing data while other VMs on the same physical hardware compete for resources.
- Some NSM applications are RAM- and CPU-intensive. Snort in 2009 is not the same as Snort in 1999. If you've looked at the resources consumed by Snort 2.x, you'll see a large footprint. NSM platforms also run session data collection tools (like SANCP), other alert data tools (maybe Bro), and more, so the platform is already sharing its resources among many competing processes.
- If you consider why virtualization is attractive, you'll see that virtualizing an NSM appliance doesn't fit the model. Virtualization is supposed to make more efficient use of underutilized hardware. The classic case involves a rack of servers, each using a fraction of its resources. Why not virtualize those servers and reduce your overall computing footprint? That model makes sense. It does not make sense to take a platform which is already monopolizing its available resources and shoehorn it into a virtual server.
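The storage and consolidation arguments above can be made concrete with a back-of-envelope calculation. The link rate and utilization figures below are illustrative assumptions for the sake of the example, not measurements from any particular sensor:

```python
# Back-of-envelope sketch: why a full content NSM sensor is a poor
# consolidation candidate. The 100 Mbit/s average link rate and the
# 10% server utilization figures are assumptions for illustration.

def pcap_bytes_per_day(avg_mbps):
    """Full content capture volume for a link averaging avg_mbps."""
    return avg_mbps * 1_000_000 / 8 * 86_400  # bits/s -> bytes/day

# A link averaging 100 Mbit/s writes roughly 1 TB of pcap per day,
# before counting session records, alerts, and other NSM data.
daily = pcap_bytes_per_day(100)
print(f"{daily / 1e12:.2f} TB/day")  # -> 1.08 TB/day

# The classic consolidation case: ten servers each around 10% utilized
# fit comfortably on one physical host. A sensor already running near
# its hardware's limits leaves no such headroom to reclaim.
servers = [0.10] * 10
print(f"combined utilization: {sum(servers):.0%}")
```

The point of the sketch is that virtualization pays off by reclaiming idle capacity; a sensor capturing full content around the clock has little idle capacity to reclaim.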
There are obvious benefits to virtualization in general, such as easy creation/destruction of VMs, the ability to snapshot/introduce a change/roll back, and so on. At this point, those advantages do not compensate for the disadvantages I outlined.
Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.