Friday, February 15, 2008

Three Capabilities, Three Companies

Recently I've been working to augment my team's detection and response capabilities. I've identified three functions for which I've turned to the commercial software community for assistance, and I'd like to highlight three capabilities and three companies that may be able to meet my requirements.

First, I need high-end network forensics. I plan to use my open source tools to do a good deal of collection and some analysis, but in certain cases I need more content-centric capabilities. For example, it would not be easy for me to extract certain types of application layer content (think documents, email attachments, and the like) using some of my tools. I am also not the only person who may need to do this work, so a collaboration- and non-expert-friendly system is needed.

For this I am taking a close look at NetWitness NextGen. I recently bought a copy of Investigator Field Edition. You can think of this product as the network forensics equivalent of a hard drive forensics product. It's content-centric, not packet-centric like Wireshark. I'm considering using NetWitness Informer to provide Tactical Traffic Assessment services to my businesses by periodically collecting traffic and reporting on what I find.

I can't deploy network sensors everywhere I have a victim host. Therefore, I am going to end up doing a lot of host-centric detection and response. When I suspect a host has been compromised, I want to be able to remotely access that host, collect live response data, and perhaps remotely image the hard drive. I need to know as much about the victim as I can, as quickly as possible.

To meet this requirement I am considering MANDIANT Intelligent Response. I visited their Alexandria, VA offices and got a look at the product. I like the fact that it is built not only for customers, but also for the MANDIANT consultants supporting DoD and other companies like mine. The consultants feed design ideas to the developers, and the team I met was open to my suggestions. I've also worked with many of the MANDIANT group and I believe they know what is needed to win incident response engagements. MANDIANT's product supports collaboration by allowing multiple investigators to research cases remotely. Their appliance has plenty of storage (3 TB I believe) to house remotely imaged hard drives as well.

The third capability I need to augment involves runtime and binary forensics, also known as memory forensics. Going one step beyond the need to conduct live response, I want to take a snapshot of memory on a victim. I want to identify rogue processes, and then 1) retrieve those processes in binary form for static and dynamic analysis on a test box and/or 2) attach a debugger to the rogue process to learn more about it in the wild. The first case is helpful to determine how malware could be used and how it is likely to communicate with the outside world. The second case could be used to observe malware in the wild, possibly even monitoring its communications with its controller -- even if those communications are encrypted on the wire.

To meet this last requirement I met today with HBGary and looked at a beta of their new HBGary Responder product. Over the next few months they are going to add the capability to remotely push their agent to a victim and then pull data from the victim to a concentrator. They plan to add collaboration features (similar to MANDIANT's) so I could manage cases in a distributed manner. Their Responder product provides Active Reversing capability and integrates the pure reverse engineering power of their Inspector tool. I was impressed by Responder's graphing capabilities and the way it showed areas of code that might interest me.

In addition to my technical detection and response needs, I also must provide security metrics for my program. It should be clear after reading such wonderfully titled posts as Control-Compliant vs Field-Assessed Security that I think input metrics are overrated. I need more output metrics to estimate the score of the game, i.e., are we winning, drawing, or losing? I am considering using HBGary Responder to provide one of our metrics in the following manner.

  1. Select a random subset of assets, like employee laptops.

  2. Use HBGary Responder to collect memory images of these assets.

  3. Use the product's binary hashing capabilities to identify processes by comparing them to the Bit9 Knowledgebase and other lists.

  4. Count the number of normal, suspicious, and malicious results over time, per machine. Ideally we want to see fewer suspicious and malicious results, with higher numbers indicating problems.

  5. Beyond the metric, use the conclusions to conduct incident response for those suspicious and malicious results.
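The tallying in steps 3 and 4 could be sketched roughly as follows. This is a minimal illustration, not HBGary's or Bit9's actual interface; the hash sets and process images below are made-up placeholders standing in for the Bit9 Knowledgebase and real binaries extracted from memory snapshots.

```python
from collections import Counter
from hashlib import sha256

# Placeholder hash sets standing in for the Bit9 Knowledgebase and a
# known-malware list; real lists would contain millions of entries.
KNOWN_GOOD = {sha256(b"notepad.exe").hexdigest()}
KNOWN_BAD = {sha256(b"rootkit.sys").hexdigest()}

def classify(process_image: bytes) -> str:
    """Classify a process image extracted from a memory snapshot by hash."""
    digest = sha256(process_image).hexdigest()
    if digest in KNOWN_GOOD:
        return "normal"
    if digest in KNOWN_BAD:
        return "malicious"
    return "suspicious"  # unknown binaries warrant further investigation

def score_machine(process_images) -> Counter:
    """Tally normal/suspicious/malicious counts for one machine's snapshot."""
    return Counter(classify(img) for img in process_images)

# One hypothetical snapshot from one laptop in the random sample.
snapshot = [b"notepad.exe", b"rootkit.sys", b"unknown.exe"]
print(score_machine(snapshot))
```

Tracking these per-machine counters over successive sampling periods gives the trend line: rising suspicious or malicious counts indicate problems, and the malicious hits feed directly into incident response.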

I generated a bunch of other metrics last year in Controls Are Not the Solution to Our Problem.

Incidentally, I'm not the only person to think these companies are offering something worthwhile. Today I read Analyze This Malware over at Dark Reading.

Application forensics is a final category of importance for which there are no real commercial tools yet. The canonical example is database forensics. The Oracle leader is (unsurprisingly!) David Litchfield and the SQL Server leader is Kevvie Fowler. Both should have books on their respective subjects arriving this year.


Dave said...

>the team I met was open to my

ummmm yeah - I can imagine that conversation.

Alice: Hey Bejtlich's here and he's got some suggestions

Bob: Yeah pull the other one

Alice: Seriously he reckons we should change a couple of things like this ....

Bob: That's interesting, good even .... shit you really mean Richard Bejtlich????

Alice: yea

Bob: And he's going to buy our shit

Alice: he's looking

Bob: And he's making suggestions ... c'mon, how much do you think he'd CHARGE for advice on a product - just make sure we do whatever he's on about!!

Anonymous said...

"Over the next few months they are going to add the capability to remotely push their agent to a victim and then pull data from the victim to a concentrator."

I don't know about you but I see a major problem with this approach. Once a computer is compromised, it is always compromised. All products have bugs and all products have design weaknesses. This product is going to rely on the compromised computer to send back real information. If you are going against a skilled attacker...aka one who has been watching your every move for several months (or your blog), then he is going to know you are going to use this product. All he has to do is go and find a copy of this software (legit or warez) and then work out a weakness or a way to just plain fake the data. So you use this project but the bad guy already knows its coming and sends back the all clear or some other kind of misinformation leaving you to suspect nothing.

When doing a forensics or a malware analysis of a computer, there is only one option: take it down and test with a clean operating system (I'm thinking knoppix std here). For the more paranoid among us, a clean computer is also good as there is a lot of flashable firmware in modern computers that an attacker can hide in.

Richard Bejtlich said...


First, you have to realize that everything in security is best-effort. I am not going to be able to shut down and image every asset I suspect to be compromised.

Second, your approach will not tell me anything about malware that is memory-resident only. This is the problem we had 10 years ago with completely memory-resident kernel mode rootkits on Solaris. You take the box down and the evidence disappears. Yes, some intruders are willing to lose access to the box rather than deploy a persistence mechanism on the disk.

There is no one best approach. I am adding tools to my kit so I have more options.

Anonymous said...

Indeed, good point!

It's just that I fear this tool may be used against you. I take a hard line on fake data as I have been on the receiving side of it before.

The memory-resident-only attacks are common; pdp seems to be pioneering this with web browsers. These are quite the problem, especially with SSL connections. Burdach does a good job of explaining the problems with memory analysis.

I suppose we need a better method of getting the information out of a compromised system without allowing the system to know about it or possibly defend against it. That's one of the reasons I have been following your lead with network analysis, as playing with the host usually leads to data manipulation and spoilage.

Anonymous said...


Actually, nothing at all like that.

When you have worked late hours on client engagements and have known a guy for about 6 years, a mutual respect tends to develop.

Nothing at all like the fanboi blog-o-world or whatever kids call this nowadays.

- M

Anonymous said...

Hi Richard,
Sounds interesting. How do you plan on getting around any privacy and legal concerns, especially using HBGary to capture memory snapshots? Did you add it to your AUPs? Thanks.

Anonymous said...

What's the advantage of MIR vs AIRS?
I would have thought you would roll your own?

Anonymous said...
This comment has been removed by a blog administrator.