Marcus Ranum on Proxies, Deep Packet Inspection

I asked security guru Marcus Ranum if he would mind commenting on using proxies as security devices. I will publish his thoughts in my new book Extrusion Detection, but he's allowed me to print those comments here and now. I find them very interesting.

"The original idea behind proxies was to have something that interposed itself between the networks and acted as a single place to 'get it right' when it came to security. FTP-gw was the first proxy I wrote, so I dutifully started reading the RFC for FTP, and gave up pretty quickly in horror. The RFC was full of cruft that I didn't want outsiders being able to run against my FTP server. So the first-generation proxies were not simply intermediaries that did a 'sanity check' against application protocols; they deliberately limited each application protocol to a command subset that I felt was the minimum that could be done securely and still have the protocol work.
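The command-subset idea can be sketched in a few lines of Python. The whitelist and reply text below are illustrative, not ftp-gw's actual command set:

```python
# Sketch of a subset-enforcing proxy: only commands on an explicit
# whitelist are relayed to the real server; everything else is
# rejected at the proxy. The set below is a hypothetical example.
ALLOWED_COMMANDS = {"USER", "PASS", "TYPE", "PORT", "RETR", "STOR", "QUIT"}

def filter_command(line: str) -> str:
    """Return the line unchanged if its command is allowed, else an error reply."""
    command = line.strip().split(" ", 1)[0].upper()
    if command in ALLOWED_COMMANDS:
        return line  # relay to the real FTP server
    return "500 command not recognized\r\n"  # never forwarded

print(filter_command("RETR file.txt"))
print(filter_command("SITE EXEC /bin/sh"))
```

An attacker probing an obscure or poorly tested command never reaches the server at all; the proxy answers for it.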

For example, while writing FTP-gw I realized that the PORT command would cause a blind connection from TCP port 20 to anyplace, over which data would be transferred. Port 20 was in the privileged range and I was able to predict an 'rshd' problem so I quietly emailed the guys at Berkeley and got them to put a check in ruserok() to forestall the attack. I also added code to make sure that the client couldn't ask the server to send data to any host other than itself, forestalling FTP PORT scanning. So I 'invented' 'FTP bounce' attacks in 1990 by preventing them with my proxy.
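The check Ranum describes can be sketched as follows; the function names and the privileged-port rule here are an illustration of the idea, not his code. An FTP PORT argument encodes an address and port as six decimal bytes (`h1,h2,h3,h4,p1,p2`), and a bounce-resistant proxy refuses any target other than the client itself:

```python
def parse_port_arg(arg: str):
    """Parse an FTP PORT argument 'h1,h2,h3,h4,p1,p2' into (ip, port)."""
    parts = [int(p) for p in arg.split(",")]
    if len(parts) != 6 or any(not 0 <= p <= 255 for p in parts):
        raise ValueError("malformed PORT argument")
    ip = ".".join(str(p) for p in parts[:4])
    port = parts[4] * 256 + parts[5]
    return ip, port

def port_allowed(arg: str, client_ip: str) -> bool:
    """Reject PORT commands pointing anywhere but the client itself."""
    ip, port = parse_port_arg(arg)
    return ip == client_ip and port >= 1024  # no bounce, no privileged ports

# Client at 192.0.2.10 asking for data on its own port 5001: allowed.
print(port_allowed("192,0,2,10,19,137", "192.0.2.10"))   # True
# Same client redirecting the data connection to a third host: refused.
print(port_allowed("198,51,100,7,0,25", "192.0.2.10"))   # False
```

The second, refused case is exactly an FTP bounce: the server would otherwise open a connection from port 20 to an arbitrary third party on the client's behalf.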

There are several other examples of where, in implementing proxies, I was horrified to see gaping holes in commonly-used application protocols, and was able to get them fixed before they were used against innocent victims. Since the proxies implemented the bare minimum command set to allow an application protocol to work, a lot of attacks simply failed against the proxy because it only knew how to do a subset of the protocol. When the hackers started searching for vulnerabilities to exploit in the Internet application suite, they often found holes in seldom-used commands or in poorly tested features of the RFC-compliant servers. But often, when they tried their tricks against the proxy, all they'd get back was: 'command not recognized'.

The proxy gave a single, controllable, point where these fixes could be installed.

The problem with proxies is that they took time, effort, and a security-conscious analyst to design and write. And sometimes the proxy ran up against a design flaw in an application protocol where it couldn't really make any improvement. The first reaction of the proxy firewall vendors was to tell their customers, 'well, protocol XYZ is so badly broken that you just can't rely on it across an untrusted network.' This was, actually, the correct way (from a security standpoint) to solve the problem, but it didn't work.

Around the time when the Internet was becoming a new media phenomenon, a bunch of firewalls came on the market that were basically smart packet filters that did a little bit of layer-7 analysis. These 'stateful firewalls' were very attractive to end users because they didn't even TRY to 'understand' the application protocols going across them, except for the minimum amount necessary to get the data through.

For example, one popular firewall from the early '90s managed the FTP PORT transaction by looking for a packet from the client containing the PORT command, parsing the port number and address out of it, and opening that port in its rule base. Fast? Certainly. Transparent to the end user? Totally. Secure? Hardly. But these 'stateful firewalls' sold very well because they offered the promise of high performance, low end-user impact, and enough security that an IT manager could say they'd tried.
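The approach described above might look like the following sketch, where a hypothetical `open_ports` set stands in for the firewall's dynamic rule table. Note what is missing: unlike the proxy, nothing checks where the data connection will actually go.

```python
# Sketch of the "stateful" shortcut: scan the client's payload for a
# PORT command and punch a matching hole in the rule base. The
# open_ports set is a stand-in for the firewall's dynamic rule table.
open_ports = set()

def handle_client_payload(payload: str) -> None:
    for line in payload.splitlines():
        if line.upper().startswith("PORT "):
            parts = [int(p) for p in line.split(" ", 1)[1].split(",")]
            ip = ".".join(str(p) for p in parts[:4])
            port = parts[4] * 256 + parts[5]
            open_ports.add((ip, port))  # opened blindly, no sanity check

handle_client_payload("PORT 198,51,100,7,0,25\r\n")
print(open_ports)  # the firewall now permits a connection to a third host
```

The hole is opened for whatever address the packet named, so the same bounce attack the proxy refused sails straight through.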

There are a few vendors who have continued to sell proxy firewalls throughout the early evolution of the Internet, but most of the proxy firewalls are long gone. Basically, the customers didn't want security; they wanted convenience and the appearance of having tried. What's ironic is that a lot of the attacks that are bedeviling networks today would never have gotten through the early proxy firewalls. But, because the end user community chose convenience over security, they wound up adopting a philosophy of preferring to let things go through, then violently slamming the barn door after the horse had exited.

Proxies keep cropping up over and over because they are fundamentally a sound idea. Every so often someone re-invents the proxy firewall - as a border spam blocker, a 'web firewall,' an 'application firewall,' a 'database gateway,' and so on. And these technologies work wonderfully. Why? Because they're a single point where a security-conscious programmer can assess the threat represented by an application protocol, and can put error detection, attack detection, and validity checking in place.

The industry will continue to veer back and forth between favoring connectivity and favoring security depending on which fad is in the ascendant. But proxies are going to be with us for the long term, and have been steadfastly keeping networks secure as all the newfangled buzzword technologies ('stateful packet inspection,' 'intrusion prevention,' 'deep packet inspection') come - and go."

This is a report he wrote for a client, but he's sharing it with the world. If you're looking for some cynical digital security artwork to grace your cubicle, check out Marcus' media outlet.
