Tuesday, September 18, 2018

Firewalls and the Need for Speed

I was looking for resources on campus network design and found these slides (pdf) from a 2011 Network Startup Resource Center presentation. These two caught my attention:



This bothered me, so I Tweeted about it.

This started some discussion, and prompted me to see what NSRC suggests for architecture these days. You can find the latest, from April 2018, here. Here is the bottom line for their suggested architecture:






What do you think of this architecture?

My Tweet has attracted some attention from the high-speed network researcher community, some of whom assume I must be a junior security apprentice who equates "firewall" with "security." Long-time blog readers will laugh at that, like I did. So what was my problem with the original recommendation, and what problems do I have (if any) with the 2018 version?

First, let's be clear that I have always differentiated between visibility and control. A firewall is a poor visibility tool, but it is a control tool. It controls inbound or outbound activity according to its ability to perform in-line traffic inspection. This in-line inspection comes at a cost, which is the major concern of those responding to my Tweet.

Notice how the presentation author thinks about firewalls. In the slides above, from the 2018 version, he says "firewalls don't protect users from getting viruses" because "clicked links while browsing" and "email attachments" are "both encrypted and firewalls won't help." Therefore, "since firewalls don't really protect users from viruses, let's focus on protecting critical server assets," because "some campuses can't develop the political backing to remove firewalls for the majority of the campus."

The author is arguing that firewalls are an inbound control mechanism, ill-suited for what he considers the most prevalent threat vectors for users: "viruses" delivered via email attachment, or "clicked links."

Mail administrators can protect users from many malicious attachments. Desktop anti-virus can protect users from many malicious downloads delivered via "clicked links." If that is your worldview, of course firewalls are not important.

His argument for firewalls protecting servers is, implicitly, that servers may offer services that should not be exposed to the Internet. Rather than disabling those services, or limiting access via identity or local address restrictions, he says a firewall can provide that inbound control.
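
For readers unfamiliar with that style of inbound control, here is a minimal sketch using Linux iptables on a border device. The service, port, and addresses are hypothetical (documentation ranges), not taken from the NSRC slides.

    # Hypothetical example: a database service on 192.0.2.10 that should not be
    # exposed to the Internet. Permit inbound TCP 5432 only from the campus
    # prefix 198.51.100.0/24, and drop everything else destined for that port.
    iptables -A FORWARD -p tcp -d 192.0.2.10 --dport 5432 -s 198.51.100.0/24 -j ACCEPT
    iptables -A FORWARD -p tcp -d 192.0.2.10 --dport 5432 -j DROP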

These arguments completely miss the point that firewalls are, in my opinion, more effective as an outbound control mechanism. For example, a firewall helps restrict an adversary's access to victims when those victims reach outbound to establish post-exploitation command and control. This relies on the firewall identifying the attempted C2 as malicious. To the extent intruders encrypt their C2 (and sites fail to inspect it) or use covert mechanisms (e.g., C2 over Twitter), firewalls will be less effective.

The previous argument assumes admins rely on the firewall to identify and block malicious outbound activity. Admins might alternatively identify the activity themselves, and direct the firewall to block outbound activity from designated compromised assets or to designated adversary infrastructure.
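
To illustrate what I mean by analyst-directed outbound control, here is a minimal sketch, again using Linux iptables with hypothetical documentation-range addresses. The point is not the tool; it is that the block decisions come from people who identified the activity.

    # Quarantine a host an analyst has designated as compromised: drop its
    # outbound traffic at the border.
    iptables -A FORWARD -s 10.1.2.3 -j DROP
    # Deny the rest of the campus access to identified adversary C2 infrastructure.
    iptables -A FORWARD -d 203.0.113.0/24 -j DROP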

As some Twitter responders said, it's possible to do some or all of this without using a stateful firewall. I'm aware of the cool tricks one can play with routing to control traffic. Ken Meyers and I wrote about some of these approaches in 2005 in my book Extrusion Detection. See chapter 5, "Layer 3 Network Access Control."
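
As a rough sketch of the routing-based flavor of that idea, an operator can null-route traffic toward identified adversary infrastructure rather than relying on a stateful firewall. The prefix below is a hypothetical documentation range.

    # Linux (iproute2): discard traffic destined for the adversary prefix.
    ip route add blackhole 203.0.113.0/24
    # Cisco IOS equivalent: point the prefix at the Null0 interface.
    # ip route 203.0.113.0 255.255.255.0 Null0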

Implementing these non-firewall-based security choices requires a high degree of diligence, which in turn requires visibility. I did not see this emphasized in the NSRC presentation. For example:


These are fine goals, but I don't equate "manageability" with visibility or security. I don't think "problems and viruses" captures the magnitude of the threat to research networks.

The core of the reaction to my original Tweet is that I don't appreciate the need for speed in research networks. I do understand that need. However, I can't understand the requirement for "full bandwidth, un-filtered access to the Internet." That is a recipe for disaster.

On the other hand, if you define partner-specific networks, and allow essentially site-to-site connectivity with exquisite network security monitoring methods and operations, then I do not have a problem with eliminating firewalls from the architecture. I do have a problem with unrestricted access to adversary infrastructure.

I understand that security doesn't exist to serve itself. Security exists to enable an organizational mission. Security must be a partner in network architecture design. It would be better to emphasize enhanced monitoring for the networks discussed above, and to think carefully about enabling speed without restrictions. The NSRC resources on the science DMZ merit consideration in this case.

Tuesday, September 11, 2018

Twenty Years of Network Security Monitoring: From the AFCERT to Corelight

I am really fired up to join Corelight. I’ve had to keep my involvement with the team a secret since officially starting on July 20th. Why was I so excited about this company? Let me step backwards to help explain my present situation, and forecast the future.

Twenty years ago this month I joined the Air Force Computer Emergency Response Team (AFCERT) at then-Kelly Air Force Base, located in hot but lovely San Antonio, Texas. I was a brand new captain who thought he knew about computers and hacking based on experiences from my teenage years and more recent information operations and traditional intelligence work within the Air Intelligence Agency. I was desperate to join any part of the then-five-year-old Air Force Information Warfare Center (AFIWC) because I sensed it was the most exciting unit on “Security Hill.”

I had misjudged my presumed level of “hacking” knowledge, but I was not mistaken about the exciting life of an AFCERT intrusion detector! I quickly learned the tenets of network security monitoring, enabled by the custom software watching and logging network traffic at every Air Force base. I soon heard there were three organizations that intruders knew to be wary of in the late 1990s: the Fort, i.e. the National Security Agency; the Air Force, thanks to our Automated Security Incident Measurement (ASIM) operation; and the University of California, Berkeley, because of a professor named Vern Paxson and his Bro network security monitoring software.

When I wrote my first book in 2003-2004, The Tao of Network Security Monitoring, I enlisted the help of Christopher Jay Manders to write about Bro 0.8. Bro had the reputation of being very powerful but difficult to stand up. In 2007 I decided to try installing Bro myself, thanks to the introduction of the “brolite” scripts shipped with Bro 1.2.1. That made Bro easier to use, but I didn’t do much analysis with it until I attended the 2009 Bro hands-on workshop. There I met Vern, Robin Sommer, Seth Hall, Christian Kreibich, and other Bro users and developers. I was lost for most of the class, saved only by my knowledge of standard Unix command line tools like sed, awk, and grep! I was able to integrate Bro traffic analysis and logs into my TCP/IP Weapons School 2.0 class, and subsequent versions, which I taught mainly to Black Hat students. By the time I wrote my last book, The Practice of Network Security Monitoring, in 2013, I was heavily relying on Bro logs to demonstrate many sorts of network activity, thanks to the high-fidelity nature of Bro data.

In July of this year, Seth Hall emailed to ask if I might be interested in keynoting the upcoming Bro users conference in Washington, D.C., on October 10-12. I was in a bad mood due to being unhappy with the job I had at that time, and I told him I was useless as a keynote speaker. I followed up with another message shortly after, explained my depressed mindset, and asked how he liked working at Corelight. That led to interviews with the Corelight team and a job offer. The opportunity to work with people who really understood the need for network security monitoring, and were writing the world’s most powerful software to generate NSM data, was so appealing! Now that I’m on the team, I can share how I view Corelight’s contribution to the security challenges we face.

For me, Corelight solves the problems I encountered all those years ago when I first looked at Bro. The Corelight embodiment of Bro is ready to go when you deploy it. It’s developed and maintained by the people who write the code. Furthermore, Bro is front and center, not buried behind someone else’s logo. Why buy this amazing capability from another company when you can work with those who actually conceptualize, develop, and publish the code?

It’s also not just Bro, but Bro at ridiculous speeds, ingesting and making sense of complex network traffic. We regularly encounter open source Bro users who spend weeks or months struggling to get their open source deployments to run at the speeds they need, typically in the tens or hundreds of Gbps. Corelight’s offering is optimized at the hardware level to deliver the highest performance, and our team works with customers who want to push Bro to even greater levels.

Finally, working at Corelight gives me the chance to take NSM in many exciting new directions. For years we NSM practitioners have worried about challenges to network-centric approaches, such as encryption, cloud environments, and alert fatigue. At Corelight we are working on answers for all of these, beyond the usual approaches — SSL termination, cloud gateways, and SIEM/SOAR solutions. We will have more to say about this in the future, I’m happy to say!

What challenges do you hope Corelight can solve? Leave a comment or let me know via Twitter to @corelight_inc or @taosecurity.