Distributed Traffic Collection with Pf Dup-To

The following is another excerpt from my upcoming book, Extrusion Detection: Security Monitoring for Internal Intrusions. I learned yesterday that it should be available during the last week of November, around the 26th.

We’ve seen network taps that make copies of traffic for use by multiple monitoring systems. Those copies are all identical, however; there is no way, using the taps just described, to send port 80 TCP traffic to one sensor and all other traffic to another sensor. Commercial solutions like the Top Layer IDS Balancer sit inline and copy traffic to specified output interfaces, based on rules defined by an administrator. Is there a way to perform a similar function using commodity hardware? Of course!

The Pf firewall introduced in Chapter 2 offers the dup-to keyword. This function allows us to take traffic that matches a Pf rule and copy it to a specified interface. Figure 4-17 demonstrates the simplest deployment of this sort of system.

Figure 4-17. Simple Pf Dup-To Deployment

First we must build a Pf bridge to pass and copy traffic. Here is the /etc/pf.conf.dup-to file we will use.

int_if="sf0"
ext_if="sf1"
l80_if="sf2"
l80_ad="1.1.1.80"
lot_if="sf3"
lot_ad="2.2.2.2"

pass out on $int_if dup-to ($l80_if $l80_ad) proto tcp from any port 80 to any
pass in on $int_if dup-to ($l80_if $l80_ad) proto tcp from any to any port 80

pass out on $int_if dup-to ($lot_if $lot_ad) proto tcp from any port !=80 to any
pass in on $int_if dup-to ($lot_if $lot_ad) proto tcp from any to any port !=80

pass out on $int_if dup-to ($lot_if $lot_ad) proto udp from any to any
pass in on $int_if dup-to ($lot_if $lot_ad) proto udp from any to any

pass out on $int_if dup-to ($lot_if $lot_ad) proto icmp from any to any
pass in on $int_if dup-to ($lot_if $lot_ad) proto icmp from any to any
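
As a quick sanity check, pfctl's -n flag parses a ruleset without loading it:

# pfctl -nf /etc/pf.conf.dup-to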

To understand this configuration file, we should add some implementation details to our simple Pf dup-to diagram. Figure 4-18 adds those details.

Figure 4-18. Simple Pf Dup-To Implementation Details

First consider the interfaces involved on the Pf bridge:

  • Interface sf0 is closest to the intranet. It is completely passive, with no IP address.

  • Interface sf1 is closest to the Internet. It is also completely passive, with no IP address.

  • Interface sf2 will receive copies of port 80 TCP traffic sent to it by Pf. It bears the arbitrary address 1.1.1.79. Access to this interface by other hosts should be denied by firewall rules, not shown here.

  • Interface sf3 will receive copies of all non-port 80 TCP traffic, as well as UDP and ICMP, sent to it by Pf. (For the purposes of this simple deployment, we are not considering other IP protocols.) It bears the arbitrary address 2.2.2.1. Access to this interface by other hosts should be denied by firewall rules, not shown here.


Now consider the two sensors.

  • Sensor 1 uses its interface sf2 to capture traffic sent to it from the Pf bridge. It bears the arbitrary IP address 1.1.1.80. Access to this interface by other hosts should be denied by firewall rules, not shown here.

  • Sensor 2 uses its interface sf3 to capture traffic sent to it from the Pf bridge. It bears the arbitrary address 2.2.2.2. Access to this interface by other hosts should be denied by firewall rules, not shown here.
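
Each sensor records whatever the bridge copies to it by sniffing its capture interface. For example, assuming Tcpdump as the capture tool (the traces shown later in this section use its output format), sensor 1 could run:

# tcpdump -n -i sf2

Sensor 2 would run the same command against sf3.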


One would have hoped the Pf dup-to function could send traffic to directly connected interfaces without the involvement of any IP addresses. Unfortunately, my testing revealed that assigning IP addresses to the interfaces on both sides of each link is required. I used OpenBSD 3.7, but future versions may not have this requirement.
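
For illustration, those address assignments might be made with ifconfig (a sketch using the values from Figure 4-18; the /24 netmasks are arbitrary choices for this sketch):

On the Pf bridge:

# ifconfig sf2 inet 1.1.1.79 netmask 255.255.255.0
# ifconfig sf3 inet 2.2.2.1 netmask 255.255.255.0

On sensor 1:

# ifconfig sf2 inet 1.1.1.80 netmask 255.255.255.0

On sensor 2:

# ifconfig sf3 inet 2.2.2.2 netmask 255.255.255.0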

With this background, we can begin to understand the /etc/pf.conf.dup-to file.

  • The first set of declarations defines macros for the interfaces and IP addresses used in the scenario.

  • The first set of pass rules tells Pf to send port 80 TCP traffic to 1.1.1.80, which is the packet capture interface on sensor 1. Two rules are needed: one for inbound traffic and one for outbound traffic.

  • The second set of pass rules tells Pf to send all non-port 80 TCP traffic to 2.2.2.2, which is the packet capture interface on sensor 2. Again, two rules are needed.

  • The third and fourth sets of pass rules send UDP and ICMP traffic to 2.2.2.2 as well.


Before testing this deployment, ensure Pf is running and that all interfaces are appropriately configured and enabled.
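On OpenBSD 3.7, that preparation might resemble the following sketch (bridge members are brought up without addresses, the bridge is built with ifconfig and brconfig, and Pf is enabled before the ruleset is loaded; exact steps may vary by version):

# ifconfig sf0 up
# ifconfig sf1 up
# ifconfig bridge0 create
# brconfig bridge0 add sf0 add sf1 up
# pfctl -e
# pfctl -f /etc/pf.conf.dup-to

To test our distributed collection system, we retrieve the Google home page using Wget.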

$ wget http://www.google.com/index.html
--10:19:50-- http://www.google.com/index.html
=> `index.html'
Resolving www.google.com... 64.233.187.99, 64.233.187.104
Connecting to www.google.com[64.233.187.99]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

[ <=> ] 1,983 --.--K/s

10:19:51 (8.96 MB/s) - `index.html' saved [1983]

Here is what sensor 1 sees on its interface sf2:

10:18:58.122543 IP 172.17.17.2.65480 > 64.233.187.99.80:
S 101608113:101608113(0) win 32768
10:18:58.151066 IP 64.233.187.99.80 > 172.17.17.2.65480:
S 2859013924:2859013924(0) ack 101608114 win 8190
10:18:58.151545 IP 172.17.17.2.65480 > 64.233.187.99.80:
. ack 1 win 33580
10:18:58.153027 IP 172.17.17.2.65480 > 64.233.187.99.80:
P 1:112(111) ack 1 win 33580
10:18:58.184169 IP 64.233.187.99.80 > 172.17.17.2.65480:
. ack 112 win 8079
10:18:58.185384 IP 64.233.187.99.80 > 172.17.17.2.65480:
. ack 112 win 5720
10:18:58.189840 IP 64.233.187.99.80 > 172.17.17.2.65480:
. 1:1431(1430) ack 112 win 5720
10:18:58.190344 IP 64.233.187.99.80 > 172.17.17.2.65480:
P 1431:2277(846) ack 112 win 5720
10:18:58.190483 IP 64.233.187.99.80 > 172.17.17.2.65480:
F 2277:2277(0) ack 112 win 5720
10:18:58.192706 IP 172.17.17.2.65480 > 64.233.187.99.80:
. ack 2277 win 32734
10:18:58.192958 IP 172.17.17.2.65480 > 64.233.187.99.80:
. ack 2278 win 32734
10:18:58.204719 IP 172.17.17.2.65480 > 64.233.187.99.80:
F 112:112(0) ack 2278 win 33580
10:18:58.232685 IP 64.233.187.99.80 > 172.17.17.2.65480:
. ack 113 win 5720

Here is what sensor 2 sees on its interface sf3:

10:18:58.089226 IP 172.17.17.2.65364 > 192.168.2.7.53:
64302+ A? www.google.com. (32)
10:18:58.113853 IP 192.168.2.7.53 > 172.17.17.2.65364:
64302 3/13/13 CNAME www.l.google.com.,
A 64.233.187.99, A 64.233.187.104 (503)

As planned, sensor 1 saw only port 80 TCP traffic, while sensor 2 saw everything else. In this case, “everything else” meant a DNS request for www.google.com. So why build a distributed collection system? This section presented a very simple deployment scenario, but you can begin to imagine the possibilities. Network security monitoring (NSM) advocates collecting alert, full content, session, and statistical data. That can place a great deal of strain on a single sensor, even if only full content data is collected.

By building a distributed collection system, NSM data can be forwarded to independent systems built specially for the tasks at hand. In our example, we offloaded heavy Web surfing activity to one sensor, and sent all other traffic to a separate sensor.

We also split the traffic passing function from the traffic recording function. The Pf bridge in Figure 4-18 is not performing any disk input/output (IO) operations. The kernel is handling packet forwarding, which can be done very quickly. The independent sensors are accepting traffic split out by the Pf bridge. The sensors can be built to perform fast disk IO. This sort of sensor load-balancing provides a way to apply additional hardware to difficult packet collection environments.

Comments

Anonymous said…
What types of bandwidth are you able to push through this setup? I suppose this is a function of what other things this box is doing and what types of hardware (PCI bus, NICs, etc.) you are using, but it is definitely an interesting consideration for people considering network aggregation devices (netoptics, shomiti, finisar, etc). While a setup like this doesn't specifically address all the problems that a traditional tap/aggregation device does, it is a refreshing way of looking at things.
Hi Jon,

I have not tested this setup under load, mainly because most of my hardware is five years older than what most people run in production. (See BejNet.) I imagine with Pf running in the kernel on OpenBSD, it would be fairly robust.
Anonymous said…
Kind of dangerous to put an inline PC-type device on the network. If that box fails, the network goes down.

Don't get me wrong, I like the idea, but it won't get deployed like that in most places that have enough bandwidth to necessitate an additional IDS.

Another approach would be to use a passive aggregator tap to monitor the connection, have the OpenBSD box monitor the aggregated interface from the tap, and then split it up according to your example.

(You can use a regular tap too; you'll just need to bridge the interfaces... which is MUCH simpler to configure on OpenBSD vs. FreeBSD's netgraph stuff.)
Hi Joe,

You could make that argument for anyone who deploys a single inline appliance. Still, I'll be sure to include the tap-to-splitter idea in the book.

Thanks!
Anonymous said…
I was very excited to see a link to this page in the SecurityFocus IDS mailing list. Sharing data from a tap has been a sore spot in our NSM architecture, and it just recently turned into a critical need. Since I have more experience with Linux, I naturally went looking for a Linux-based implementation of the solution you implemented on BSD. I found it in the Linux bridging utilities and ebtables (info here: http://linux-net.osdl.org/index.php/Bridge and here http://ebtables.sourceforge.net/ ). Once all the input and output interfaces on the Linux box are bridged, ebtables is used to forward all packets arriving from the tap* to the desired output interfaces. Since ebtables works at layer 2, if you want to filter on layer 3 parameters (addresses, ports) you would need to pass the packet to iptables. Since we are not operating in-line, this is, in fact, an implementation of the solution Joe discusses in the previous message.

*Note: The bridging on the Linux box is not used for aggregating the tap output. Even though we are not operating in-line, we still need bridging to enable forwarding of Ethernet packets. Currently we aggregate tap output on a switch, then use a mirror port to send the data to the IDS. We're now just redirecting that mirror port output to the Linux box, where the packets are copied to multiple interfaces, one of which is connected to the IDS. In the near future we will replace the current tap with an aggregating tap, eliminating the switch from the setup and greatly simplifying things.
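
As a rough sketch of the layer 2 plumbing (interface names are just examples: eth1 faces the tap or mirror port, while eth2 and eth3 feed the monitoring boxes), the bridge floods frames to every port, and the ebtables FORWARD policy controls which ports frames from the tap may actually leave:

# brctl addbr br0
# brctl addif br0 eth1
# brctl addif br0 eth2
# brctl addif br0 eth3
# ifconfig br0 up
# ebtables -P FORWARD DROP
# ebtables -A FORWARD -i eth1 -o eth2 -j ACCEPT
# ebtables -A FORWARD -i eth1 -o eth3 -j ACCEPT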
Anonymous said…
In terms of the comment about a single inline box: if you are using OBSD for PF, you can use CARP and pfsync to create failover/load balancing. So redundancy is not an issue with hardware failures, etc.

Great article too!
Jascha,

Will CARP and pfsync work in this scenario on a bridging OpenBSD system?

Richard
Anonymous said…
Hi,

I have this setup:

A sensor connected to a tap via two bridged Ethernet interfaces (em0 and em1 -> bridge0). Here is my rc.conf:

ifconfig_em0="promisc -arp up"
ifconfig_em1="promisc -arp up"

cloned_interfaces="bridge0"
ifconfig_bridge0="addm em0 addm em1 monitor up"

I can tcpdump traffic going through the network tap on bridge0. The next step is to dup DNS traffic to another loopback interface, lo1. Here is my pf.conf.local:

fede_if="bridge0"
l53_if="lo1"
l53_ad="127.0.1.1"

pass in log on $fede_if dup-to ($l53_if $l53_ad) proto {tcp,udp} from any to any port 53 no state

This pf rule doesn't match any packets. If I use the following tcpdump filter 'dst port 53' on bridge0, it displays packets. Are packets reaching bridge0 not considered inbound ("in") packets by PF?

Thanks,
