Live from BSDCan Day Two
Day two of the first ever BSDCan is over. This concludes the conference, which we believe was a great success. Dan Langille reported over 175 attendees and is making plans for a second conference next year.
I started the day with Michael Richardson discussing libpcap 1.0. Michael described how the current libpcap file format, major version 2, minor version 4, will eventually become major version 3. The current format puts a single header (pcap_file_header) at the start of every trace file, followed by per-packet headers (currently pcap_pkthdr), which makes it difficult to concatenate two separate trace files. The proposed new version eliminates the per-file header and carries more information in the per-packet headers, making it easier to mix packets from various sources into a single trace file. Compare pcap.h and pcap1.h to get a sense of what he means.
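For readers who have not opened those headers, the classic 2.4-format layout looks roughly like this. This is a simplified sketch loosely based on pcap.h; the real definitions use libpcap's own typedefs, but the on-disk meaning is the same:

#include <stdint.h>

/* Simplified sketch of the classic 2.4 savefile layout, loosely based
 * on pcap.h; the real definitions use libpcap's own typedefs. */

/* One per trace file.  This global header is what makes naive
 * concatenation of two savefiles impossible. */
struct pcap_file_header {
    uint32_t magic;          /* 0xa1b2c3d4; also reveals byte order */
    uint16_t version_major;  /* currently 2 */
    uint16_t version_minor;  /* currently 4 */
    int32_t  thiszone;       /* GMT-to-local time correction */
    uint32_t sigfigs;        /* timestamp accuracy */
    uint32_t snaplen;        /* maximum bytes captured per packet */
    uint32_t linktype;       /* data link type (see savefile.c) */
};

/* One per packet, immediately followed by the packet bytes. */
struct pcap_sf_pkthdr {
    uint32_t ts_sec;         /* timestamp, seconds */
    uint32_t ts_usec;        /* timestamp, microseconds */
    uint32_t caplen;         /* bytes actually saved */
    uint32_t len;            /* original length on the wire */
};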
Besides the new header format Michael discussed, he also presented a two-year-old alternative format with its own issues. I also learned to pay attention to savefile.c, which documents the link types.
Michael also described his work on netdissect, a library of protocol printers for Tcpdump; he mentions it in posts to tcpdump-workers here and here. This is part of an effort to modularize Tcpdump, and it will eventually provide options for sending Tcpdump output to places other than the screen. In the future, Tcpdump could be split into a privilege-separated version (different from the existing OpenBSD implementation) in which one program uses the kernel and BPF to capture traffic and passes it to a lower-privilege dissector program.
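To make the privilege-separation idea concrete, here is a minimal sketch of my own, not Michael's code: a privileged parent owns the capture descriptor and ships raw packets over a pipe to an unprivileged child that only parses. The interface name and the setuid/setgid calls are placeholders for a real drop to a dedicated user:

#include <pcap.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical illustration of privilege separation for a sniffer: the
 * parent keeps the capture descriptor, the child drops privileges and
 * only parses what it is handed over a pipe. */

static int pktpipe[2];   /* parent writes raw packets, child reads them */

static void handoff(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
    /* Ship the capture header and packet bytes to the dissector. */
    write(pktpipe[1], h, sizeof(*h));
    write(pktpipe[1], bytes, h->caplen);
    (void)user;
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p;

    if (pipe(pktpipe) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {
        /* Child: give up privileges, then read and "dissect" packets.
         * A real tool would switch to a dedicated user like _tcpdump. */
        struct pcap_pkthdr h;
        u_char buf[65536];

        close(pktpipe[1]);
        setgid(getgid());
        setuid(getuid());
        while (read(pktpipe[0], &h, sizeof(h)) == sizeof(h)) {
            read(pktpipe[0], buf, h.caplen);
            printf("dissector saw %u bytes\n", (unsigned)h.caplen);
        }
        return 0;
    }

    /* Parent: privileged capture only, no protocol parsing at all. */
    close(pktpipe[0]);
    p = pcap_open_live("em0", 65535, 1, 1000, errbuf);   /* placeholder NIC */
    if (p == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }
    pcap_loop(p, -1, handoff, NULL);
    pcap_close(p);
    return 0;
}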
Next I attended FreeBSD Core Team member Wes Peters' talk. He discussed how his company, St. Bernard Software, builds dependable appliances using FreeBSD. I thought his talk was interesting, but his decision to put the full text of his presentation on the screen was not very clever. He said that 14 people have been killed as a result of PowerPoint-like presentations, a claim alluded to elsewhere and discussed in a recent issue of Software Development magazine. That doesn't mean it makes sense to post pages of text in paragraph form, then stare at them looking for the point one needs to make while standing in front of a crowd. The key to using PowerPoint, or slides in general, is to present key points and not get bogged down in descending levels of detail where critical issues are buried from view.
One of the practical points I took from Wes' talk was his advice to avoid flash disks in favor of hard drives. If flash must be used, Wes had praise for SanDisk and Lexar Media. He also claimed Samba 2.2.x performs poorly when transferring large (> 1 GB) files, and that 3.x isn't much better.
Wes said his appliances keep three images on disk: one primary, one backup, and one read-only failsafe/panic boot partition. His solution makes extensive use of vnodes and stores configuration data in a PostgreSQL database. The appliance operates in a degraded mode when something fails, offering enough information to perform troubleshooting.
His appliance does IP configuration by listening for a packet sent by an administrative console application using a source IP owned by St. Bernard. Connecting to the device's serial port launches a configuration wizard, not a shell. Only through working with St. Bernard tech support could one access a console of any kind.
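As an illustration of the "listen for a configuration packet" approach, here is a hypothetical sketch of my own, not St. Bernard's code: a capture filter restricted to a designated source network. The 192.0.2.0/24 network and the em0 interface are placeholders, not their real values:

#include <pcap.h>
#include <stdio.h>

/* Hypothetical sketch of "listen for a configuration packet": capture
 * only traffic claiming to come from a designated network.  192.0.2.0/24
 * is a documentation placeholder, not St. Bernard's real address space,
 * and em0 is a placeholder interface. */

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct bpf_program prog;
    struct pcap_pkthdr hdr;
    const u_char *pkt;
    pcap_t *p;

    p = pcap_open_live("em0", 1500, 1, 1000, errbuf);
    if (p == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }

    if (pcap_compile(p, &prog, "src net 192.0.2.0/24", 1, 0) == -1 ||
        pcap_setfilter(p, &prog) == -1) {
        fprintf(stderr, "%s\n", pcap_geterr(p));
        return 1;
    }

    /* Block until the administrative console announces itself. */
    do {
        pkt = pcap_next(p, &hdr);
    } while (pkt == NULL);

    printf("received %u-byte configuration packet\n", (unsigned)hdr.caplen);
    /* ... parse the packet and apply the IP settings here ... */

    pcap_close(p);
    return 0;
}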
Their product receives three sorts of updates: (1) subscriptions, which update application data; (2) patches, which fix application bugs and security flaws; and (3) releases, which upgrade the entire system image, including the kernel. To retrieve updates, the appliances poll servers owned by St. Bernard daily.
Ryan McBride from the OpenBSD team gave the next talk I attended. He discussed Pf, the OpenBSD packet filter, and gave a nifty demo of firewall failover with Pfsync and CARP. A cool aspect of CARP (Common Address Redundancy Protocol) is its ability to work on devices other than firewalls; during the talk, Theo mentioned a university using CARP to offer Samba via four servers. Ryan and Theo also mentioned spamd, a sort of La Brea Tarpit to catch spammers.
Speaking of Theo, he gave the last talk I attended, an updated version of the exploit mitigation presentation he gave at CanSecWest last month. He argued that Microsoft's adoption of a ProPolice-like system in Windows is flawed: instead of setting the canary used to protect the stack at run-time, Microsoft computes the canary at compile-time. This means every copy of the same application has the same canary, making life easier for intruders. In Theo's words: "They completely missed the point!" Theo also commented that it's a bad idea to have a single 'nobody' user for multiple jailed processes. It's much better to give each jailed process its own unprivileged user, like _tcpdump, _apache, and so forth. That way, an intruder can't use ptrace between jails running under the same user ID and string together the means to escape.
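To see why the compile-time choice matters, here is a hand-rolled illustration of the guard logic. This is neither ProPolice nor Microsoft's actual implementation; real compilers emit the check automatically and place the canary precisely between the buffer and the saved return address:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Hand-rolled illustration of a ProPolice-style stack guard.  Real
 * compilers insert this automatically and control the stack layout;
 * this only mimics the logic. */

static unsigned long __guard;

static void guard_setup(void)
{
    /* Run-time canary: chosen per process, so an attacker cannot know
     * it in advance.  A real system would use a strong source such as
     * arc4random() or /dev/urandom; srandom()/random() is only for show. */
    srandom((unsigned)time(NULL) ^ (unsigned)getpid());
    __guard = (unsigned long)random();

    /* A compile-time canary, by contrast, would amount to something like
     *     #define __guard 0xdeadbeefUL
     * Every copy of the binary then carries the same value, so anyone
     * with a copy of the program can simply write that value back over
     * the canary while smashing the stack, defeating the check. */
}

static void copy_input(const char *input)
{
    unsigned long canary = __guard;   /* guard value saved on the stack */
    char buf[16];

    strcpy(buf, input);               /* deliberately unsafe copy */

    if (canary != __guard) {          /* overwritten? abort, don't return */
        fprintf(stderr, "stack smashing detected\n");
        abort();
    }
}

int main(int argc, char **argv)
{
    guard_setup();
    copy_input(argc > 1 ? argv[1] : "hello");
    return 0;
}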
Overall, I thought the conference was excellent, and I intend to return next year. I paid for the affair myself and I feel I got my money's worth. I got to meet some really interesting people, including BSD Hacks author Dru Lavigne. Too bad Michael Lucas was absent -- I'm waiting for his books on NetBSD and Cisco Routers for the Desperate.
Over the course of the weekend, several people spoke to me about Sguil and monitoring in general. A few had questions about how to conduct monitoring when asymmetric routing is used. Asymmetric routing typically involves traffic being sent over one interface and route and returned over a different interface and route. There seem to be two common scenarios. First, at the client side, one might have a downlink served by a satellite feed with a phone line used for the uplink. Not only are such routes asymmetric, the latency and bandwidth are asymmetric too. This causes all sorts of problems for TCP, which generally assumes similar link performance for inbound and outbound traffic. A 1997 paper and slides explain the problems with such setups. Another paper can be found here.
Second, at the server side, one might have a server connected to two or more links for redundancy and performance reasons. While one of the links may be a primary and hence have better performance, links of equal capability are often used.
Where monitoring is concerned, administrators care most about the second, server-side scenario. Some vendors, like Top Layer, sell products that bring the traffic back together for analysis. Some also advocate per-flow rather than per-packet load balancing on routing gear, where possible. I think there must be an open source solution to this, perhaps involving bridging promiscuous interfaces and creating a virtual interface to monitor; a rough sketch of the idea follows.
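Here is the kind of rough sketch I have in mind, assuming libpcap and two placeholder interfaces (em0/em1): capture on both legs and write everything to one trace file, so the analysis tool sees both halves of each connection. A real tool would use select() on the capture descriptors rather than polling:

#include <pcap.h>
#include <stdio.h>

/* Sketch: merge the packets seen on two interfaces into one savefile so
 * a sensor sees both halves of an asymmetrically routed connection.
 * Interface names are placeholders and error handling is minimal. */

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *a, *b;
    pcap_dumper_t *dump;
    struct pcap_pkthdr *hdr;
    const u_char *data;

    a = pcap_open_live("em0", 65535, 1, 100, errbuf);   /* one leg */
    if (a == NULL) { fprintf(stderr, "em0: %s\n", errbuf); return 1; }
    b = pcap_open_live("em1", 65535, 1, 100, errbuf);   /* other leg */
    if (b == NULL) { fprintf(stderr, "em1: %s\n", errbuf); return 1; }

    /* One trace file for the analysis engine to read. */
    dump = pcap_dump_open(a, "merged.pcap");
    if (dump == NULL) { fprintf(stderr, "%s\n", pcap_geterr(a)); return 1; }

    for (;;) {
        /* Poll each interface in turn; a production tool would select()
         * on pcap_get_selectable_fd() instead of alternating reads. */
        if (pcap_next_ex(a, &hdr, &data) == 1)
            pcap_dump((u_char *)dump, hdr, data);
        if (pcap_next_ex(b, &hdr, &data) == 1)
            pcap_dump((u_char *)dump, hdr, data);
    }

    /* not reached */
    pcap_dump_close(dump);
    pcap_close(a);
    pcap_close(b);
    return 0;
}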
These issues were debated on snort-users and NANOG in March. Linux Journal also discussed techniques to handle this problem in April.