Monday, November 26, 2012

Why Collect Full Content Data?

I recently received the following via email:

I am writing a SANS Gold paper on a custom full packet capture system using Linux and tcpdump. It is for the GSEC Certification, so my intent is to cover the reasons why to do full packet capture and the basic set up of a system (information that wasn't readily available when setting my system up)...

I am already referencing The Tao of Network Security Monitoring.

These are the questions that I came up with based on questions other peers have asked me...

Here are the questions, followed by my answers. Most of this is covered in my previous books and blog posts, but for the sake of brevity I'll try posting short, stand-alone responses.

  1. As an information security analyst in today's threat landscape, why would I want to do full packet capture in my environment? What value does it have?

    Full content data or capturing full packets provides the most flexibility and granularity when analyzing network-centric data. Unlike various forms of log data, full content data, if properly collected, is the actual data that was transferred -- not a summarization, or representation, or sample.
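    As a concrete sketch of what "properly collected" full content data looks like on the kind of Linux/tcpdump sensor the question describes, the following builds a minimal capture command. The interface name, storage path, and hourly rotation interval are assumptions to adjust for your own environment; the script only prints the command so you can review it before running it as root.

```shell
#!/bin/sh
# Sketch of full content collection with tcpdump (assumed setup:
# monitoring interface "eth0", storage under /nsm/pcap).
IFACE=eth0
OUTDIR=/nsm/pcap

# -n       skip name resolution on the sensor
# -s 0     capture entire packets, not truncated headers
# -G 3600  rotate the output file every hour
# -w       write raw packets to a timestamped file (strftime expansion)
CAPTURE_CMD="tcpdump -n -i $IFACE -s 0 -G 3600 -w $OUTDIR/trace-%Y%m%d-%H%M%S.pcap"

echo "$CAPTURE_CMD"
```

    Note that on older tcpdump versions the default snapshot length truncates packets, which is why -s 0 matters for full content; disk budgeting and pruning old traces are left out of this sketch.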

  2. Where should I place a full packet capture system on my network - are ingress/egress points sufficient?

    I prioritize collection locations as follows:

    • Collect where you can see the true Internet destination IP address for traffic of interest, and where you can see the true internal source IP address for traffic of interest. This may require deploying two traffic access methods with two sensors; so be it.
    • Collect where you can see traffic to and from your VPN segment. Remember the previous IP address requirements.
    • Collect where you can see traffic to and from business partners or through "third party gateways." You need to acquire the true source IP, but you may not be able to acquire the true destination IP if the business partner prevents collecting behind any NAT or security devices that obscure the true destination IP.
    • Collect where your business units exchange traffic. This is more of a concern for larger companies, but you want to see the true source and destination IPs (if possible) of internal traffic as it crosses business boundaries.
    • Consider cloud or hosted vendors who enable collection near Infrastructure-as-a-Service platforms used by your company.
  3. What advantages are there to creating a custom server with open source tools (such as a server running Linux and capturing with tcpdump) as opposed to buying a commercial solution (like Solera or Niksun)?

    A custom or "open" platform enables analysts to deploy the sorts of tools they need to accomplish their security mission. Closed platforms require the analyst to rely on the information provided by the vendor.

  4. Now that I have full packet data, what kind of analysis goals should I have to address advanced threats and subtle attacks?

    The goal for any network security monitoring operation is to collect and analyze indicators and warnings to detect and respond to intrusions. Your ultimate role is to detect, respond to, and contain adversaries before they accomplish their mission, which may be to steal, alter, or destroy your data.

  5. Any other advice for an analyst just getting started with full packet capture systems and analyzing the data?

    Rarely should you start with full content data. Don't dump a ton of traffic into Wireshark and start scrolling around. I recommend working with session data (connection logs) and application-specific logs (HTTP, DNS, etc.) to identify sessions of interest, then examining the content if necessary to validate your suspicions.
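    To make that workflow concrete, here is a hedged sketch of the last step: suppose your session logs flagged a conversation between two hosts (the addresses, port, and file names below are hypothetical), and you want to carve only that traffic out of a stored full content trace for closer inspection. Again the script just prints the command for review.

```shell
#!/bin/sh
# Hypothetical session of interest identified from connection logs.
SRC=192.0.2.10     # internal host flagged by session data
DST=203.0.113.5    # external destination of interest
FILTER="host $SRC and host $DST and tcp port 443"

# -r reads a saved full content trace; -w writes only the packets
# matching the BPF filter, yielding a small file fit for Wireshark.
CARVE_CMD="tcpdump -n -r /nsm/pcap/trace.pcap -w session.pcap $FILTER"

echo "$CARVE_CMD"
```

    Carving a small session-specific file first keeps the full trace on the sensor and gives Wireshark something it can load quickly.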

I could write a lot more on this topic. Stay tuned.

Sunday, November 25, 2012

Spectrum of State Responsibility

"Attribution" for digital attacks and incidents is a hot topic right now. I wanted to point readers to this great paper by Jason Healey at the Atlantic Council titled Beyond Attribution: Seeking National Responsibility in Cyberspace.

ACUS published the report in February, but I'm not hearing anyone using the terms described therein. Probably my favorite aspect of the paper is the chart pictured at left. It offers a taxonomy for describing state involvement in digital attacks, ranging from "state-prohibited" to "state-integrated."

I recommend using the chart and ideas in the paper as a starting point the next time you have a debate over digital attribution.

Saturday, November 24, 2012

Recommended: The Great Courses "Art of War" Class

I recently purchased and listened to an audio course titled The Art of War (TAOW) by Prof Andrew R. Wilson, published by The Great Courses. From the first few minutes I knew this series of six 30-minute lessons was going to be great.

For example, did you know that "Sun Tzu" didn't write "The Art of War?" An anonymous author wrote the book in the 4th century BC, based on Sun Tzu's lessons from his time in the 6th century BC.

Also, "The Art of War" isn't even the name of the book! It's actually "Master Sun's Military Method." Furthermore, the use of the term "Master" is significant as it was a term not usually associated with generals.

I especially like two aspects of the course. First, the lecturer, to paraphrase his own words, didn't simply peruse TAOW looking for trite phrases. He equates that approach with telling a stock broker to "buy low, sell high." Instead, Prof Wilson is more concerned with explaining the context for the book and what the words really mean.

Second, the lecturer extends his discussion beyond the history of China's Warring States Period, the era from which TAOW was born. Prof Wilson applies lessons from the book to military history and business situations. He also applies TAOW to modern Chinese cyber espionage, showing he keeps current with contemporary issues.

Consider buying TAOW as a holiday gift for yourself or your friends!

Friday, November 23, 2012

Commander's Reading List

Last month a squadron commander asked me to recommend books for his commander's reading list. After some reflection I offer the following.

I've divided the list into two sections: technical and nontechnical. My hope for the technical books is to share a little bit of technical insight with the commander's intended audience, while not overwhelming them. The plan for the nontechnical items is to share some perspective on history, policy, and contemporary problems.

The list is in no particular order.

Nontechnical books:

Technical books:

I also recommend any books by Timothy L. Thomas.

Update: For the more technically-minded reader, I'm adding the following:

Practical Malware Analysis by Michael Sikorski and Andrew Honig.

Note: The above do not necessarily constitute my "best" or "favorite" books. Please see Best Books for blog posts on that subject.

Thursday, November 22, 2012

Do Devs Care About Java (In)Security?

In September InformationWeek published an article titled Java Still Not Safe, Security Experts Say. From that article by Matthew J. Schwartz:

Is Java 7 currently safe to use?

Last week, Oracle released emergency updates to fix zero-day vulnerabilities in Java 7 and Java 6. But in the case of the Java 7 fix, the new version allows an existing flaw--spotted by security researchers and disclosed to Oracle earlier this year--to be exploited to bypass the Java sandbox. In other words, while fixing some flaws, Oracle opened the door to another one.

In light of that situation, multiple security experts said that businesses should continue to temporarily disable all Java use, whenever possible. "There are still not-yet-addressed, serious security issues that affect the most recent version of Java 7," said Adam Gowdiak, CEO and founder of Poland-based Security Explorations, which initially disclosed the exploited vulnerabilities to Oracle in April. "In that context, disabling Java until proper patches are available seems to be an adequate solution," he said via email.

A month later I read a new article in InformationWeek titled "Oracle's Java Revival," also available as Two Years Later: A Report Card On Oracle's Ownership of Java by Andrew Binstock. The article appeared in the 29 October 2012 issue of InformationWeek, at a time when the security community continued to reel from repeated hammering of Java vulnerabilities.

I expected some mention of Java security woes in the article. About halfway through, with the word "security" not yet in print, I found the following:

In 2011, Oracle did not fare much better. The welcome release of Java 7 was marred by the revelation that it included serious defects that the company knew about.

OK, maybe there will be some expansion of this idea? Shouldn't a terrible security record be a major factor affecting enterprise use of Java, and a reflection on Oracle's handling of Java? Instead I read this:

I'm inclined to agree with James Gosling's revised opinion of Oracle's stewardship, that it's been good for Java...

However, the record is mixed in other areas...

Oracle's ambiguous relationship with the JCP and the OSS communities remain two other weak points.

That's it? Security pros continue to tell enterprise users to disable Java, and the development community is more concerned about features, personalities, and community relations?

I think the Java development community, and especially Oracle, must reevaluate their responsibilities regarding security. Otherwise, they may find themselves coding for a platform that enterprise users will increasingly disable.