Saturday, May 23, 2009

Response for Daily Dave

Recently on the Daily Dave mailing list, Dave Aitel posted the following:

...The other thing that keeps coming up is memory forensics. You can do a lot with it today to find trojan .sys's that hackers are using - but it has a low ceiling I think. Most rootkits "hide processes", or "hide sockets". But it's an insane thing to do in the kernel. If you're in the kernel, why do you need a process at all? For the GUI? What are we writing here, MFC trojans? There's not a ton of entropy in the kernel, but there's enough that the next generation of rootkits is going to be able to avoid memory forensics as a problem they even have to think about. The gradient here is against memory forensics tools - they have to do a ton of work to counteract every tiny thing a rootkit writer does.

With exploits it's similar. Conducting memory forensics on userspace in order to find traces of CANVAS shellcode is a losing game in even the medium run. Anything thorough enough to catch shellcode is going to have too many false positives to be useful. Doesn't mean there isn't work to be done here, but it's not a game changer.


Since I'm not 31337 enough to get my post past Dave's moderation, I'll just publish my reply here:

Dave and everyone,

I'm not the guy to defend memory forensics at the level of an Aaron Walters, but I can talk about the general approach. Dave, I think you're applying the same tunnel vision to this issue that you apply to so-called intrusion detection systems. (We talked about this a few years ago, maybe at lunch at Black Hat?)

Yes, you can get your exploit (and probably your C2) past most detection mechanisms (which means you can bypass the "prevention" mechanisms too). However, are you going to be able to hide your presence on the system and network -- perfectly, continuously, perpetually? (Or at least as long as it takes to accomplish your mission?) The answer is no, and this is how professional defenders deal with this problem on operational networks.

Memory forensics is the same. At some point the intruder is likely to take some action that reveals his presence. If the proper instrumentation and retention systems are deployed, once you know what to look for you can find the intruder. I call this retrospective security analysis, and it's the only approach that's ever worked against the most advanced threats, analog or digital. [1] The better your visibility, threat intelligence, and security staff resources, the smaller the exposure window (compromise -> adversary mission completion). Keeping the window small is the best we can do; keeping it closed is impossible against advanced intruders.
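The retrospective workflow can be sketched roughly as: retain records continuously, then re-search the archive whenever new threat intelligence arrives. A minimal illustration in Python follows; the log format and the indicator value are hypothetical, not drawn from any real tool:

```python
# Sketch of retrospective security analysis: records retained over time
# are re-searched once an indicator becomes known. The log entries and
# the C2 address below are made up for illustration.

retained_logs = [
    {"ts": "2009-05-01T10:00", "src": "10.0.0.5", "dst": "203.0.113.9"},
    {"ts": "2009-05-02T14:30", "src": "10.0.0.7", "dst": "198.51.100.2"},
    {"ts": "2009-05-03T09:15", "src": "10.0.0.5", "dst": "203.0.113.9"},
]

def retrospect(logs, indicator):
    """Return every retained record matching a newly learned indicator."""
    return [rec for rec in logs if rec["dst"] == indicator]

# Weeks later, 203.0.113.9 is identified as a C2 server. The archive
# shows the compromise began May 1 -- before the intelligence existed.
hits = retrospect(retained_logs, "203.0.113.9")
print(len(hits))  # 2 matching records
```

The point of the sketch is that detection depends on retention: without the archive, the new indicator finds nothing.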

Convincing developers and asset owners to support visibility remains a problem, though.

Sincerely,

Richard

[1] http://taosecurity.blogspot.com/2009/02/black-hat-briefings-justify-supporting.html


I encounter Dave's attitude fairly often. What do you think?


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

7 comments:

Keydet89 said...

I'm 100% with Richard on this...at some point, the intruder has to do something, and their presence will be revealed to someone with the right visibility and right skills.

This is well-known in the military. At some point, that sniper or those Marines in an LP/OP position (listening/observation post) are going to succumb to human needs, or they're going to have to send data back to higher headquarters...if they don't, what's the point? The most stealthy sniper needs to move or maybe even take a shot.

The point is that something, somewhere will be revealed. I don't run across many "professional defenders" and more often than not I deal with customers lacking even the most basic visibility, or rudimentary training...but the fact is that there is something, somewhere...muddy bootprints on the carpet, or something more subtle, like a bent twig or some turned-up stones...

Matthew said...

I'm going with Dave on this one. Having used CANVAS on numerous real-world pen tests I've never had it detected by IDS.

Your argument appears to make sense, but it's an untenable position. Given enough time, the probability of detection goes to 1. However, by that time it's too late. Even in a short, week-long pen test, an experienced tester has already cracked passwords, created users, installed keyloggers, gotten your most critical data, and has 2-3 boxes sitting idle so that if you detect a noisier box they still have some place to go.

At that point detection doesn't really matter. You can't get rid of the attacker as your only real advantage is physical access to machines -- which is dubious in many global enterprise environments. The attacker already has the most valuable data and at that point is probably just letting the assets idle.

Richard Bejtlich said...

Matthew,

I wrote this post

http://taosecurity.blogspot.com/2009/05/defenders-dilemma-and-intruders-dilemma.html

for people who need more details.

If you can accomplish your objective in the amount of time you cite, your objective is far narrower than that of the sorts of intruders who really scare me, or the networks involved are too small to concern me.

Rick said...

Memory forensics is also about baselining your system so you can detect when the system is operating outside the norms.

We do a lot of things in security that have a low probability of detecting an attack, but the fact is that the more sources you have, the more likely you are to detect the attack.
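Rick's baselining idea can be illustrated in a few lines: snapshot what runs on a known-good system, then flag anything later observed in memory that falls outside that set. This toy Python sketch uses made-up process names and stands in for what a real memory forensics tool would enumerate:

```python
# Toy illustration of baselining: anything observed that is absent from
# a known-good snapshot is flagged for review. Names are hypothetical.

baseline = {"smss.exe", "csrss.exe", "services.exe", "lsass.exe"}

def deviations(observed, baseline):
    """Return items present now but missing from the known-good baseline."""
    return sorted(set(observed) - baseline)

observed = ["smss.exe", "csrss.exe", "services.exe", "lsass.exe", "x9k.sys"]
print(deviations(observed, baseline))  # ['x9k.sys']
```

A deviation is not proof of compromise, but, as Rick says, it is one more source that raises the odds of noticing the attack.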

Michael Cloppert said...

Richard,

I agree with your conclusions and explanation. I have a somewhat different lens that first strikes me when I hear things like this, though.

Our position as analysts is part of a larger function which is risk management, not risk elimination. If we base our efforts around the latter, we will never be effective at the former. By forcing the adversaries into an arms race, as Dave insinuates, we force sophistication on their part and reduce the threat space that is effective at compromising our security. This means we can apply more expensive and sophisticated techniques to the remaining problem which would not be possible on a larger scale. Is this situation "winnable" for the defender? Probably not, but then again neither is vulnerability management for the very same reasons. Does that mean we should ease up efforts to reduce the problem? Of course not.

National security makes for a good, if incomplete, analogue. Can we guarantee the security of our country from foreign adversaries? Of course not. We must sometimes engage in arms races to mitigate a possible attack when diplomacy is ineffective - the Cold War is a good example. And just like in national security, the best strategic solution to the problem is in policy - conflict is often a symptom of ideology that can in many cases be addressed through diplomacy and legal frameworks. This has not yet been recognized by policymakers or lawmakers to also apply to information security, thus we are left in a difficult arms race to mitigate conditions in which we are no longer secure.

I feel proponents of this somewhat defeatist attitude reveal a lack of appreciation for that important distinction between risk management and risk elimination which is a guiding principle behind our decisions every day.

-Mike

Matthew said...

Richard,

I usually work with Fortune 100 companies and/or large government agencies. If they aren't sufficiently large to concern you then I'm a bit baffled. Likewise, in that time frame I've generally established a strong foothold to ensure continued access to the network, and have access to critical systems (email, employee data, customer databases, PDC).

I like to think I'm a fair pen tester, but similar results are typical of an assessment done by most quality firms.

Richard Bejtlich said...

Matthew, I don't think you are considering the scope of the problem I'm addressing. I'm talking about adversaries who employ dedicated, specialized teams that operate well outside normal pen testing boundaries and survive despite months of intense removal activity.