Tuesday, September 18, 2018

Firewalls and the Need for Speed

I was looking for resources on campus network design and found these slides (pdf) from a 2011 Network Startup Resource Center presentation. These two caught my attention:

This bothered me, so I Tweeted about it.

This started some discussion, and prompted me to see what NSRC suggests for architecture these days. You can find the latest, from April 2018, here. Here is the bottom line for their suggested architecture:

What do you think of this architecture?

My Tweet has attracted some attention from the high speed network researcher community, some of whom assume I must be a junior security apprentice who equates "firewall" with "security." Long-time blog readers will laugh at that, like I did. So what was my problem with the original recommendation, and what problems do I have (if any) with the 2018 version?

First, let's be clear that I have always differentiated between visibility and control. A firewall is a poor visibility tool, but it is a control tool. It controls inbound or outbound activity according to its ability to perform inline traffic inspection. This inline inspection comes at a cost, which is the major concern of those responding to my Tweet.

Notice how the presentation author thinks about firewalls. In the slides above, from the 2018 version, he says "firewalls don't protect users from getting viruses" because "clicked links while browsing" and "email attachments" are "both encrypted and firewalls won't help." Therefore, "since firewalls don't really protect users from viruses, let's focus on protecting critical server assets," because "some campuses can't develop the political backing to remove firewalls for the majority of the campus."

The author is arguing that firewalls are an inbound control mechanism, and they are ill-suited for the most prevalent threat vectors for users, in his opinion: "viruses," delivered via email attachment, or "clicked links."

Mail administrators can protect users from many malicious attachments. Desktop anti-virus can protect users from many malicious downloads delivered via "clicked links." If that is your worldview, of course firewalls are not important.

His argument for firewalls protecting servers is, implicitly, that servers may offer services that should not be exposed to the Internet. Rather than disabling those services, or limiting access via identity or local address restrictions, he says a firewall can provide that inbound control.
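The source-address restriction alluded to above amounts to a simple allowlist check, whether it is enforced by a firewall, a router ACL, or the host itself. A minimal sketch (the prefixes are invented RFC 5737 documentation addresses, not from the post):

```python
import ipaddress

# Hypothetical campus prefixes allowed to reach an internal-only service
# (RFC 5737 documentation ranges, invented for this illustration).
ALLOWED_SOURCES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def permit_inbound(src_ip: str) -> bool:
    """Return True if the source address falls within an allowed prefix."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_SOURCES)
```

The point is that this control is about *where connections may come from*, not about inspecting their content.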

These arguments completely miss the point that firewalls are, in my opinion, more effective as an outbound control mechanism. For example, a firewall helps restrict adversary access to his victims when they reach outbound to establish post-exploitation command and control. This relies on the firewall identifying the attempted C2 as being malicious. To the extent intruders encrypt their C2 (and sites fail to inspect it) or use covert mechanisms (e.g., C2 over Twitter), firewalls will be less effective.

The previous argument assumes admins rely on the firewall to identify and block malicious outbound activity. Admins might alternatively identify the activity themselves, and direct the firewall to block outbound activity from designated compromised assets or to designated adversary infrastructure.
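A minimal sketch of that admin-directed workflow, assuming the compromised-host and adversary lists are maintained out of band (all addresses are invented RFC 5737/RFC 1918 examples, not from the post), might simply translate those lists into standard iptables drop rules:

```python
def outbound_block_rules(compromised_hosts, adversary_ips):
    """Build iptables-style commands that drop outbound traffic
    originating from known-compromised internal hosts, and traffic
    destined for known adversary infrastructure."""
    rules = [f"iptables -I FORWARD -s {h} -j DROP" for h in compromised_hosts]
    rules += [f"iptables -I FORWARD -d {ip} -j DROP" for ip in adversary_ips]
    return rules

if __name__ == "__main__":
    # Example: one compromised workstation, one known C2 address.
    for rule in outbound_block_rules(["10.0.5.12"], ["203.0.113.9"]):
        print(rule)
```

Here the firewall is just the enforcement point; the detection and judgment come from the monitoring operation feeding it.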

As some Twitter responders said, it's possible to do some or all of this without using a stateful firewall. I'm aware of the cool tricks one can play with routing to control traffic. Ken Meyers and I wrote about some of these approaches in 2005 in my book Extrusion Detection. See chapter 5, "Layer 3 Network Access Control."
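One such routing trick, in the spirit of that layer 3 access control chapter, is a blackhole route: rather than a stateful firewall rule, the router simply has no forwarding path to a designated adversary prefix (on Linux, something like `ip route add blackhole 203.0.113.0/24`). A toy lookup illustrating the effect, with an invented prefix:

```python
import ipaddress

# Hypothetical adversary prefix the admin has null-routed
# (RFC 5737 documentation range, invented for illustration).
BLACKHOLED = [ipaddress.ip_network("203.0.113.0/24")]

def router_forwards(dst_ip: str) -> bool:
    """Return False if the destination falls in a blackholed prefix,
    i.e. the router silently discards the packet instead of forwarding it."""
    addr = ipaddress.ip_address(dst_ip)
    return not any(addr in net for net in BLACKHOLED)
```

No stateful inspection is involved: the control is purely a forwarding decision, which is why it scales to speeds firewalls struggle with.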

Implementing these non-firewall-based security choices requires a high degree of diligence, which requires visibility. I did not see this emphasized in the NSRC presentation. For example:

These are fine goals, but I don't equate "manageability" with visibility or security. I don't think "problems and viruses" captures the magnitude of the threat to research networks.

The core of the reaction to my original Tweet is that I don't appreciate the need for speed in research networks. I do understand that need. However, I can't understand the requirement for "full bandwidth, un-filtered access to the Internet." That is a recipe for disaster.

On the other hand, if you define partner-specific networks, and allow essentially site-to-site connectivity with exquisite network security monitoring methods and operations, then I do not have a problem with eliminating firewalls from the architecture. I do have a problem with unrestricted access to adversary infrastructure.

I understand that security doesn't exist to serve itself. Security exists to enable an organizational mission. Security must be a partner in network architecture design. It would be better to emphasize enhanced monitoring for the networks discussed above, and to think carefully about enabling speed without restrictions. The NSRC resources on the science DMZ merit consideration in this case.

3 comments:

Nick Buraglio said...

This is an interesting and insightful interpretation. However, as you noted, within the research community there are needs and requirements for running at rates that are not feasible for currently available firewalls and UTMs to digest and transit without either failing (thus causing a self-created DoS) or incurring packet loss (also causing a self-generated DoS). While this is an edge case, it is a very valid use case that has existed outside of the typical enterprise architecture that 99% of the computing world identifies with. Along those same lines, there is a real question that should be answered, and that is one of architecture. Where do I need what? Just placing a UTM or stateful firewall at a border is all too often the de facto check box for “now I am secure”, and as you stated in regard to running firewall free, “Implementing these non-firewall-based security choices requires a high degree of diligence, which requires visibility.” While true, this is a statement that I take issue with, because it implies that dropping a magic box at the egress of a network removes that requirement, which it clearly does not. I would instead amend that statement to “successful security requires a high degree of diligence[, which requires visibility],” because it is a perpetual process that never ends, and requires constant re-evaluation. Just as importantly, the stereotypical enterprise concept of “I’ll just put a firewall at my border” breaks down very quickly when more complicated designs come into play, specifically architectures that don’t have single or even dual “border” designs. By utilizing visibility and operational diligence one can successfully manage the security of a network just as well regardless of the technical execution.
It’s very important to consider more than typical network designs, as such designs are far more common than most would care to believe, and the more we recognize this the better off we can be from an end-to-end perspective and, more importantly, the better we can provide an optimal experience for those that use the resources. Those of us that live on the fringe areas of networking and security are appreciative of your willingness to re-evaluate your statements, as they hold a sizable amount of influence in the community.

Michael said...

I wonder how much of their bandwidth is going to unmonitored malicious traffic?

Povl H. Pedersen said...

Most companies do not think about outbound filtering, yet it is usually a very effective security measure.

If you look at servers, you could use iptables or another local firewall to at least complicate the job of the malware.

And for servers, having a proxy server that permits whitelisted domain names only (and a firewall/router with whitelisted IPs for other protocols) would keep them reasonably secure, even if someone runs some bad stuff on them or opens a bad link: no contact to a stage 2 or C&C server.

An authenticating proxy server for workstations (with handling of other needed protocols as needs change) is also very effective. Most malware cannot authenticate against a proxy. But more stuff starts using IE to get URLs.

But I completely agree, controlling outbound traffic is a very important step if you want security.