NSM-Friendly VMware Lab Setup
I'm working on labs for my all-new TCP/IP Weapons School 2.0 class (early registration ends Wednesday). Almost the whole class is labs; I'll have between 10 and 12 scenarios for students to investigate.
As you might imagine, network traffic will play a key role. I wanted to set up a VM running Ubuntu that could watch traffic involving other VMs. (Why not FreeBSD? Ubuntu is easier for students to use, and NSMnow makes it easy to get Sguil running. FreeBSD has also never seemed to run well in VMs due to some weird timing issues that have never been resolved.)
The problem, as I noted in Using VMware for Network Security Monitoring last year, is that modern versions of VMware Server (I run 1.0.8 now) act as switches and not hubs. That means each VM is connected to a virtual switch, effectively sheltered from other traffic. This is good for performance but bad for my monitoring needs.
Monitoring on the VMware server itself is not an option. Although the server can see the traffic, I want to distribute to the students a VM that was running during each scenario, capturing the traffic with Sguil and other tools as necessary.
Incidentally, for reference, here are two options for sniffing on the VMware server itself. VMware mentions its vmnet-sniffer, a console application with essentially no features, intended only for troubleshooting:
richard@neely:~$ sudo vmnet-sniffer -e /dev/vmnet1
len 98 src 00:0c:29:7f:d6:a1 dst 00:0c:29:0a:0f:c1 IP src 10.1.1.3
dst 10.1.1.4 ICMP ping request - len=64 type=8
00:0c:29:7f:d6:a1 08 00 88 e6 c0 17 00 01 ae 85 59 49 b5 2e 07 00 08
09 0a 0b 0c 0d 0e 0f 10 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f
20 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 34 35 36
37
len 98 src 00:0c:29:0a:0f:c1 dst 00:0c:29:7f:d6:a1 IP src 10.1.1.4
dst 10.1.1.3 ICMP ping reply
You could just as easily run Tcpdump or any other sniffer of your choice:
richard@neely:~$ sudo tcpdump -n -i vmnet1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vmnet1, link-type EN10MB (Ethernet), capture size 96 bytes
20:41:51.272555 IP 10.1.1.3 > 10.1.1.4: ICMP echo request, id 49175, seq 1, length 64
20:41:51.273469 IP 10.1.1.4 > 10.1.1.3: ICMP echo reply, id 49175, seq 1, length 64
One note: vmnet-sniffer can watch /dev/vmnet0 even though vmnet0 is not listed by ifconfig. vmnet0 is the bridged interface, so with Tcpdump you just watch the physical interface it bridges to (e.g., eth0) directly.
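For example, if vmnet0 bridges to the host's eth0 (interface name assumed; yours may differ), the equivalent Tcpdump capture is simply:
richard@neely:~$ sudo tcpdump -n -i eth0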
What to do? I decided that I could deploy the NSM sensor VM as a gateway, and put any hosts which I want to monitor as legs on that gateway. Consider this three-host scenario:
- NSM sensor VM / gateway with 1) eth0 as 172.16.99.3, default gateway 172.16.99.2 (the VMware NAT /dev/vmnet8 gateway); 2) eth1 as 192.168.230.3, on a random subnet; and 3) eth2 as 10.1.1.3, on another random subnet
- Windows victim with interface 192.168.230.4, default gateway 192.168.230.3
- Linux attacker with interface 10.1.1.4, default gateway 10.1.1.3 (see the configuration sketch after this list)
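For reference, here is a minimal sketch of the Linux attacker's network configuration (the hostname in the prompt is assumed; the Windows victim gets the equivalent static address and gateway in its adapter settings):
root@attacker:~# ifconfig eth0 10.1.1.4 netmask 255.255.255.0 up
root@attacker:~# route add default gw 10.1.1.3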
I configured the NSM sensor to be a gateway, and told it to NAT connections outbound to the VMware server NAT interface:
root@tws-u804:~# echo "1" > /proc/sys/net/ipv4/ip_forward
root@tws-u804:~# iptables -t nat -A POSTROUTING -s 192.168.230.0/24 -o eth0 -j MASQUERADE
root@tws-u804:~# iptables -t nat -A POSTROUTING -s 10.1.1.0/24 -o eth0 -j MASQUERADE
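Note that neither the ip_forward setting nor the iptables rules survive a reboot on their own. Since the sensor VM gets handed to students, one way to make them persistent (a sketch, assuming stock Ubuntu file locations) is:
root@tws-u804:~# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
root@tws-u804:~# iptables-save > /etc/iptables.rules
and then restoring the saved rules at boot, for example with a pre-up iptables-restore line in /etc/network/interfaces.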
Why this "second" level of NAT via MASQUERADE? It turns out that if you send traffic from, say, 10.1.1.4 through a gateway that doesn't NAT, when that gateway sends the traffic with source IP 10.1.1.4 to the NAT interface on the VMware server, the VMWare server doesn't know how to handle the replies. I saw the traffic exit properly (i.e., it was NATed out), but when the reply arrived the VMware server didn't know how to return it to 10.1.1.4. With this "second" NAT on the NSM sensor / gateway, the VMware server thinks the gateway is originating all traffic, so the hosts can reach the Internet.
With this setup I can now monitor traffic from 10.1.1.4 to 192.168.230.4, because the traffic is routed through the NSM sensor / gateway.
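To confirm the sensor sees that traffic, you can generate something from the attacker (an ICMP ping, say) and watch the internal leg on the sensor, roughly like this:
root@tws-u804:~# tcpdump -n -i eth2 host 10.1.1.4 and host 192.168.230.4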
This seems kludgy, and I wish there were a way to just configure VMware Server to act like a hub and have all hosts see all traffic. If anyone knows how to do that, please let me know.
Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.
Comments
You might consider having the VMs use a single NIC that connects to a physical hub, and have your NSM VM use a USB NIC and connect its sensor port to the hub. USB NICs are fairly cheap, and you may find that this method helps solve other problems too, such as separating host traffic from VM traffic.
That's an idea, but I don't want to force my students to buy extra hardware.
I have replaced VMware with VirtualBox OSE and can confirm that the "Internal Network" operates as a hub.
Anonymous, I can't rely on all the students running the same old version of a VMware product.