More Unpredictable Intruders

Search my blog for the term unpredictable and most of the results involve discussions of one of my three security principles, namely

Many intruders are unpredictable.

Two posts by pdp perfectly demonstrate this:

How many of you who are not security researchers even knew that data: or jar: protocols existed? (It's rhetorical, no need to answer in a comment.) Do you think your silver bullet security product knows about it? How about your users or developers?

No, this is another case where the first time you learn of a feature in a product is in a description of how to attack it. This is why the "ahead of the threat" slogan at the left is a pile of garbage. This is another example of Attacker 3.0 exploiting features devised by Developer 2.5 while Security 1.0 is still thinking about how great it is no big worms have hit since 2005. (The specific cases here are worse than Developer 2.5, since jar: and data: protocols are apparently old!)
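
For readers who have never seen these schemes, here is roughly the shape of such URIs; the hostname and payloads below are invented for illustration:

    data:text/html,<b>hello</b>
    data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==
    jar:http://example.com/archive.jar!/inner/page.html

The data: scheme (RFC 2397) carries the document inside the URI itself, and jar: points at a file inside a remote archive; both give an attacker ways to deliver script that URL filters may never inspect.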

How do I propose handling issues like this? As always, NSM is helpful. If you've been keeping track of what happens in your enterprise, you can perform some retrospective network analysis (RNA) to check whether this latest attack vector has already appeared on your network. (RNA is a term that Network Instruments would like to have coined. I like it, even though the concept of recording traffic in this manner dates back to Todd Heberlein's original Network Security Monitor in 1988. The first mention I can quickly find is in the 1997 paper Netanalyzer: A retrospective network analysis tool.)
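
As a concrete illustration, here is a minimal Python sketch of that kind of retrospective query, assuming you archive Squid-style proxy logs with the URL in the seventh field; the log location and field position are assumptions to adjust for your environment:

    #!/usr/bin/env python3
    # Retrospective search of archived proxy logs for jar: and data: URIs.
    # Assumes Squid-style logs (URL in field 7); adjust LOG_GLOB as needed.
    import glob
    import gzip
    import re

    LOG_GLOB = "/var/log/squid/access.log*"   # hypothetical archive location
    SUSPECT = re.compile(r"\b(?:jar|data):", re.IGNORECASE)

    def open_log(path):
        # Rotated archives are often gzipped; fall back to plain text.
        if path.endswith(".gz"):
            return gzip.open(path, "rt", errors="replace")
        return open(path, errors="replace")

    for path in glob.glob(LOG_GLOB):
        with open_log(path) as log:
            for line in log:
                fields = line.split()
                if len(fields) > 6 and SUSPECT.search(fields[6]):
                    print(path, fields[6])

If a hit turns up, you have evidence the vector has already been used against you; if not, you at least know what your instrumentation could or could not have seen.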

RNA, together with ongoing network analysis via NSM from this point of enlightenment forward (and, ideally, other forms of instrumentation, such as logs), facilitates impact assessment. Who cares if the sky is falling somewhere else, as reported in whatever online news story -- is your sky falling? If yes, what's the damage? How best can we mitigate and recover? These are the sorts of questions one can answer when some data is available, enabling management by fact and avoiding management by belief.

Comments

dre said…
How many of you who are not security researchers even knew that data: or jar: protocols existed? Do you think your silver bullet security product knows about it? How about your users or developers?

What concerns me most is that the Firefox developers and security team did not know about it.

In Web-Centric Short-Term Incident Containment you mention netsed, Whitetrash, and Palo Alto Networks. Trusteer.com and similar technologies will start to become available to solve these client-side issues before the big browsers think about reworking their issues (special note: that requires starting from scratch).

I recently saw Jay Beale present at Toorcon 9 on They're Hacking Our Clients. The presentation is worth a read. In his last slide, he speaks to a possible solution using "client-side IPS/NAC".

Of course, Beale, Whitetrash, and Trusteer are just scratching the surface of the problem here - adding more "icing" rather than "baked-in" security.

The truth, if you can handle it, is that only a text-mode browser like Links can be assured any level of realistic security for browsing the Internet (from a restricted shell, of course), even with all of the protections mentioned earlier. You won't see URI abuse issues in code that precludes that functionality.

If we really want to consider JavaScript to be "safe" enough for regular browsing, we should follow the advice from Tim Brown as mentioned in this article from GNUCITIZEN on XSS worms and mitigation controls.

Do I often browse with Firefox using NoScript, CookieSafe, and LocalRodeo? Yes, but really it's not good enough, as these sorts of findings demonstrate. Can I detect attacks given the best-of-the-best in RNA or NSM (note: I do have FireKeeper installed)?

It's my opinion that the detection paradigm will not work... it certainly doesn't work for spam. Isn't the technology (regex, parsers, etc.) in spam filtration the same as in IDS/IPS? Actually, no: IDS/IPS requires a hell of a lot more complexity, more protocol support, etc., making it even less reliable!

People in charge of building software need to stop development and implement the ideas available in the recent DJB paper. Larger projects will need to look at formalizing requirements and abuse cases, and at adopting risk-based security testing techniques.

To this end, I recently had the pleasure of attending the VERIFY 2007 conference in DC, where I saw Sean Barnum of Cigital speak on Leveraging Attack Patterns for Software Security Testing, presenting all of the current research from the CAPEC project at MITRE. I urge people to read his presentations available here.

I also had a chance to speak with Brenda Larcom (founding member of Octotrike) regarding her presentation on Privilege-Centric Security Analysis given at Toorcon 9 in San Diego. A good explanation of her recent presentation is available here.

One of the largest problems preventing these approaches (Larcom, Barnum, CAPEC, et al.) from gaining popularity may be terminology. "Privilege-centric", "risk-modeling", "risk-based security testing", "attack-modeling", and the Microsoft favorite, "threat-modeling" (which Sean Barnum explains is really part of Architectural Risk Analysis, as described in that document) - all these terms tend to muddle up and confuse the industry.
Hi dre,

You said

"Can I detect attacks given the best-of-the-best in RNA or NSM (note: I do have FireKeeper installed)?

It's my opinion that the detection paradigm will not work... it certainly doesn't work for spam. Isn't the technology (regex, parsers, etc) in spam filtration the same in IDS/IPS? Actually, no, IDS/IPS requires a hell of a lot more complexity, more protocol support, etc - making it even less reliable!"

I am not proposing that detection is the answer to this problem. Rather, I propose visibility as a means to see if you are affected by it. If at some point we realize that a class of activity is so opaque that it is not possible to see it in action, that should be a big indicator that it's a problem. It's an application of my "building visibility in" idea.
Dre, I forgot to say that was a really good post -- lots of good ideas and references.
Excellent post, Richard (and Dre). It was one of those sipping-the-coffee-spitting-onto-keyboard moments when I read your post about jar and data protocols. "WHAAAAAT??"
dre said…
Rather, I propose visibility as a means to see if you are affected by it

@Richard: You have a really good point here - visibility can really help you learn how real-world attacks work, and sometimes identify unknown attacks.

I'm not sure that visibility requires a hefty investment in infrastructure, however. In my mind, you don't need line-rate packet capture devices with taps on every upstream hard-wired Ethernet port and AirDefense at every corner of the office. You can simply take samples of traffic using tcpdump or Wireshark from cheap mirror ports, and add some old laptops with AirPcap and rpcapd.
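
As a rough sketch of that sampling idea, a small Python wrapper around tcpdump could grab a five-minute capture from a mirror port every hour; the interface name, output directory, and timings here are all assumptions:

    #!/usr/bin/env python3
    # Periodic traffic sampling from a mirror port (run with sniffing privileges).
    # A sketch of the cheap-sampling idea; IFACE and OUTDIR are hypothetical.
    import subprocess
    import time

    IFACE = "eth1"            # hypothetical mirror-port interface
    OUTDIR = "/var/pcap"      # hypothetical storage directory
    SAMPLE_SECONDS = 300      # one five-minute sample per hour

    while True:
        stamp = time.strftime("%Y%m%d-%H%M%S")
        # -G rotates the capture file after SAMPLE_SECONDS and -W 1 makes
        # tcpdump exit after that first file, yielding one sample per pass.
        subprocess.run([
            "tcpdump", "-i", IFACE, "-s", "0",
            "-G", str(SAMPLE_SECONDS), "-W", "1",
            "-w", f"{OUTDIR}/sample-{stamp}.pcap",
        ], check=False)
        time.sleep(3600 - SAMPLE_SECONDS)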

For attacks of this nature, I would rather simply note how your organization is affected (a proof of concept in a lab should be able to demonstrate this). If everyone running Firefox (35% of your organization) is vulnerable, then some sort of scheduled protection should go into place if the software provider does not plan on providing a patch/upgrade.

I prefer an actionable prevention plan vs. a "sit-and-wait" detection approach.
dre,

Visibility, like "security," is cheaper if designed into the architecture and not bolted on. When visibility is an afterthought it can be more expensive.

While I think sampling is better than nothing, using it as an ultra-cheap collection method is not what I would recommend for anyone who is serious about this problem.

I'm all for prevention -- the more the better. The problem comes when you don't know you are supposed to be preventing. This is the whole point of NSM. It's not just "wait and see." It's "because I didn't even know what I was supposed to be waiting or seeing, I kept track of what happened in my company and now I have some data for investigation purposes."
PaulM said…
I think your best bet against the unknown class of vulnerability, or, as is somewhat the case with pdp architect's new advisories, an unknown attack surface, is monitoring for post-attack behavior.

Because while nothing was looking for URIs containing jar: (and it will still be a week or so before anything is good at parsing them), the end result is the same. It's XSS.

If you have a published app that could be exploited, monitor web server logs for shady stuff that your log parser pukes on or that just looks out of place.
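
As one illustration, a minimal Python sketch of that kind of log check might read Apache-style access logs on stdin; the "shady" patterns are examples, not a complete signature set:

    #!/usr/bin/env python3
    # Flag web server log entries that look out of place.
    # Usage (hypothetical path): python3 flag_odd.py < /var/log/apache2/access.log
    import re
    import sys

    ODD = re.compile(r"(?:jar:|data:|<script|%3Cscript|\.\./)", re.IGNORECASE)

    for line in sys.stdin:
        # In combined log format the request line sits inside the first
        # pair of double quotes.
        match = re.search(r'"([^"]*)"', line)
        if match and ODD.search(match.group(1)):
            print(line, end="")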

Or if you have users, look for browser-exploity things like errant .exe downloads, SMTP traffic from DHCP ranges, odd P2P traffic, etc.
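
The SMTP-from-DHCP-ranges check is just as mechanical. A minimal sketch, assuming flow records exported as CSV lines of source, destination, and destination port, with an example CIDR block standing in for your client DHCP range:

    #!/usr/bin/env python3
    # Spot SMTP traffic sourced from client DHCP ranges, a classic
    # post-compromise behavior. CSV layout and CIDR block are assumptions.
    import csv
    import ipaddress
    import sys

    DHCP_RANGES = [ipaddress.ip_network("10.10.0.0/16")]  # example range
    SMTP_PORTS = {25, 465, 587}

    for row in csv.reader(sys.stdin):
        if len(row) != 3 or not row[2].strip().isdigit():
            continue  # skip headers or malformed records
        src, dst, dstport = (field.strip() for field in row)
        try:
            addr = ipaddress.ip_address(src)
        except ValueError:
            continue
        if int(dstport) in SMTP_PORTS and any(addr in net for net in DHCP_RANGES):
            print(f"possible bot mailer: {src} -> {dst}:{dstport}")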

Because while attacks change rapidly, the underlying activity surrounding certain classes of vulnerability changes much more slowly. By focusing on these things, our response team now sends 0day to the AV/IDS vendors on a weekly basis.

PS - Is it pick on ISS week? :-)
PaulM,

Definitely -- I need to blog this, but in brief I see three orders of monitoring:

1. First: Recon and exploitation (traditional IDS/IPS world)
2. Second: Reinforcement, consolidation, and pillage
3. Third: Consequences (company data on P2P networks, posted on .cn Web server, company IP in botnet C&C channel, etc.)
Anonymous said…
very good. thx guys!
Anonymous said…
Richard,

This may be off topic, apologies in advance.

Any thoughts on the recent burglary at CIHost's Chicago datacenter? (http://www.theregister.co.uk/2007/11/02/chicaco_datacenter_breaches/)

Should datacenters also have armed guards (like banks)?
Vivek,

No comment -- others have covered this.
