Black Hat Federal 2006 Wrap-Up, Part 5

Please see part 1 for an introduction if you are reading this article separately.

Next I heard Stefano Zanero discuss problems with testing intrusion detection systems. He said that researchers prefer objective means with absolute results, while users prefer subjective means with relative results. This drives the "false positive" debate. Researchers see false positives as failures of the IDS engine to work properly, while users see any problem as the fault of the whole system.

Stefano mentioned work done by Giovanni Vigna and others on the Python-based Sploit, which creates exploit templates and mutant operators to test IDSs. He also cited an ICSA Labs project that doesn't appear to have made much progress developing IDS testing methodologies. Stefano said that good IDS tests must include background traffic; running an exploit on a quiet network is a waste of time. Stefano is developing a test bed for network traffic generation in the context of testing IDSs. He lamented that there was no "zoo list" of attacks currently seen in the wild, as is the case with viruses and related malware.

Stefano claimed that vendors who say they perform "zero day detection" are really "detecting new attacks against old vulnerabilities." I don't necessarily agree with this, and I asked him about "vulnerability filter"-type rules. He said those are a hybrid of misuse and anomaly detection. Stefano declared that vendors who claim to perform "anomaly detection" are usually doing protocol anomaly detection, meaning they identify odd characteristics in protocols and not odd traffic in aggregate. He reminded the audience of Bob Colwell's paper "If You Didn't Test It, It Doesn't Work." He also said that vendor claims must be backed up by repeatable testing methodologies. Stefano made the point that a scientist could never publish a paper that offered unsubstantiated claims.
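
To make that distinction concrete, here is a rough C sketch of my own (not anything Stefano presented, and not any vendor's code): the first function flags a protocol anomaly in a single HTTP request line, while the second keeps a crude statistical baseline of traffic volume and flags aggregate deviations.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Protocol anomaly check: a request line with an unknown method or an
 * absurdly long URI is suspicious on its own, no baseline required. */
static bool http_request_line_anomalous(const char *line)
{
    static const char *methods[] = { "GET ", "POST ", "HEAD ", "PUT ", "DELETE ", "OPTIONS " };
    bool known = false;
    for (size_t i = 0; i < sizeof(methods) / sizeof(methods[0]); i++)
        if (strncmp(line, methods[i], strlen(methods[i])) == 0)
            known = true;
    return !known || strlen(line) > 2048;
}

/* Aggregate anomaly check: an exponentially weighted moving average of
 * bytes per interval; intervals far above the baseline are flagged. */
struct traffic_baseline { double ewma; double alpha; };

static bool traffic_anomalous(struct traffic_baseline *b, double bytes_this_interval)
{
    bool odd = (b->ewma > 0.0) && (bytes_this_interval > 5.0 * b->ewma);
    b->ewma = b->alpha * bytes_this_interval + (1.0 - b->alpha) * b->ewma;
    return odd;
}

int main(void)
{
    struct traffic_baseline b = { 0.0, 0.1 };
    printf("protocol anomaly: %d\n", http_request_line_anomalous("FOO /x HTTP/1.0"));
    printf("traffic anomaly:  %d\n", traffic_anomalous(&b, 1000.0)); /* first sample only seeds the baseline */
    return 0;
}
```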

I spoke with Stefano briefly and was happy to hear he uses my first book as a text for his undergraduate security students.

After Stefano's talk I listened to Halvar Flake explain attacks on uninitialized local variables. I admit I lost him about halfway through his talk. Here is what I can relate. Non-initialized stack variables are regions on the stack that have not been initialized before being used. The trick is _not_ to insert code into these variables to be later executed, but to _control_ these values (as they might contain array indices or pointers). These can then be abused to gain control. The problem revolves around the fact that the stack is not cleaned when items are popped off, for performance reasons. OK, that's about it. [Note: thanks to Halvar below for correcting this text!]
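
To illustrate the bug class, here is a contrived C sketch of my own, not code from Halvar's talk; whether the two stack frames actually overlap depends on the compiler and optimization level.

```c
#include <stdio.h>
#include <string.h>

/* Called first: copies attacker-supplied bytes into a local buffer and
 * returns, leaving those bytes behind in the now-popped stack frame. */
static void parse_input(const char *attacker_data)
{
    char scratch[64];
    strncpy(scratch, attacker_data, sizeof(scratch) - 1);
    scratch[sizeof(scratch) - 1] = '\0';
}

/* Called second: 'index' is never initialized, so it may reuse whatever
 * parse_input() left at the same stack offset.  If an attacker controls
 * those stale bytes, the attacker controls the array index: the point is
 * control of a value, not injection of code. */
static void lookup(void)
{
    int index;                          /* deliberately uninitialized */
    int table[4] = { 10, 20, 30, 40 };
    printf("table[%d] = %d\n", index, table[index]); /* possible out-of-bounds read */
}

int main(void)
{
    parse_input("AAAAAAAA");            /* stand-in for untrusted input */
    lookup();                           /* undefined behavior by design */
    return 0;
}
```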

Here are a few general thoughts on the talk. Twice Halvar noted that finding 0-days is not as easy as it once was. Bugs are getting harder to exploit. In order to test new exploitation methods, Halvar can't simply find 20 0-days in an application and run his exploits against those vulnerabilities. He also can't write simple yet flawed code snippets and test against those, since they do not adequately reflect the complexity of modern applications. What he ends up doing is "patching in" flawed code to existing applications. That way he ends up with a complex program with known problems, against which he can try novel exploitation methods.

Halvar made heavy use of graphical depictions of code paths, as shown by his company's product BinNavi. This reminded me of the ShmooCon reverse engineering BoF, where many of the younger guns expressed their interest in graphical tools. As the problem of understanding complex applications only grows, I see these graphical tools as being indispensable for seeing the bigger picture.

For points of future research, Halvar wondered whether there are uninitialized heap variables that could be exploited. He said that complexity cuts two ways in exploit development. Complex applications sometimes give intruders more freedom of maneuver. They also may make exploitation more difficult, because the order in which an application allocates memory often matters. Halvar mentioned that felinemenace.org/~mercy addressed the same subject as his talk.
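
Here is an equally contrived C sketch of the heap analogue Halvar raised; again, this is my own illustration, and whether the allocator hands back the same chunk depends on the heap implementation, which is exactly his point about allocation order mattering.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct request {
    char  name[32];
    void (*handler)(void);   /* the field the code below forgets to set */
};

int main(void)
{
    /* Step 1: attacker-influenced bytes occupy a heap chunk, then the
     * chunk is freed without being wiped. */
    char *staging = malloc(sizeof(struct request));
    if (!staging) return 1;
    memset(staging, 'A', sizeof(struct request));   /* stand-in for controlled data */
    free(staging);

    /* Step 2: the allocator may return the same chunk.  The code sets
     * 'name' but never 'handler', so the call below goes through stale,
     * possibly attacker-controlled bytes (expect a crash in this demo). */
    struct request *req = malloc(sizeof(*req));
    if (!req) return 1;
    strcpy(req->name, "demo");
    req->handler();
    free(req);
    return 0;
}
```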

Robert Graham, chief scientist of ISS, gave the last talk I saw. He discussed the security of Supervisory Control And Data Acquisition (SCADA) systems. SCADA systems control power, water, communication, and other utilities. Robert does domestic and foreign penetration testing of SCADA systems, and he included a dozen case studies in his talk.

For example, he mentioned that the Blaster worm shut down domestic oil production for several days after a worker connected an infected laptop to a diagnostic network! Who needs a hurricane? Robert also told how a desktop negotiation for a pen test resulted in a leap from the corporate conference room, via wireless to a lab network, via Ethernet to the office network, through a dual-homed Solaris box to the SCADA network. At that point the prospective client said "Please stop." (ISS received approval from the client to make each step, I should note.) In another case, Robert's team found a lone Windows PC sitting in a remote unlocked shed that had complete connectivity via CDPD modem to a SCADA network.

The broad outline of his conclusions includes:

  • Patches are basically forbidden on SCADA equipment.

  • SCADA systems require little or no authentication.

  • Little or no SCADA traffic is encrypted.

  • Despite SCADA industry claims, SCADA systems are publicly accessible from the Internet, wireless networks, and dial-up. The "air gap" is a myth.

  • There is little to no logging of activity on the SCADA networks.


In sum, Robert said that SCADA "executives are living in a dreamworld." They provide network diagrams with "convenient omissions" of links between office and SCADA/production segments. They are replacing legacy, dumb, RS-232 or 900 MHz-based devices with WinCE or Linux, Ethernet or 802.11-based devices. Attackers can jump from the Internet, to the DMZ, to the office network, to the SCADA network. Attackers do not have to be "geniuses" to figure out how to operate SCADA equipment; the manuals they need are either online or on the company's own open file servers. SCADA protocols tend to be firewall-unfriendly, as they often need random ports to communicate. SCADA protocols like the Inter Control Center Protocol (ICCP) or OLE for Process Control (OPC, a DCOM-based Microsoft protocol) are brittle. OPC, for example, relies on ASN.1.

At the end of the talk Robert said there was no need to panic. I asked "Why?" and so did Brian Krebs. Robert noted that SCADA industry threat models point to "accidents, not determined adversaries." That is a recipe for disaster. The SCADA network reminds me of the Air Force in the early 1990s. I will probably have more to say on that in a future article.

I hope you enjoyed this conference wrap-up. I look forward to your comments.

Comments

Anonymous said…
Mr. Zanero is 100% incorrect - for example, Arbor's Peakflow SP does in fact make use of NetFlow telemetry to perform statistical anomaly-detection based upon pps/bps/source-dest pairs, and so forth. Their Peakflow/X system performs behavioral anomaly-detection, modeling communications relationships and then spotting deviations from same (also using NetFlow telemetry).
Anonymous said…
You don't happen to have contact information or a link to the presentation for Robert Graham, do you? I'd love to find out more about his speech on SCADA security.
Sorry, I have neither.

Richard
Anonymous said…
there was a talk about scada at toorcon this year

http://toorcon.org/2005/slides/mgrimes/
http://toorcon.org/2005/conference.html?id=16
Anonymous said…
In fact, Mr. Dobbins is right - and I am, too :)

During my talk, I specifically quoted Arbor - and Lancope, but I haven't seen their system at work - as examples of people really doing anomaly detection, or at least trying to.

Richard (btw: thanks for mentioning my talk :) has simply reported the short version: there's a lot of people claiming to do "zero-day" protection with MISUSE based systems, which is obviously false.

Hope this clears up the matter,
Stefano
halvar.flake said…
A short correction: Non-initialized stack variables are regions on the stack that have not been initialized before being used. The trick is _not_ to insert code into these variables to be later executed, but to _control_ these values (as they might contain array indices or pointers). These can then be abused to gain control.

Cheers,
Halvar
Halvar,

Wow, I didn't even get that first part right. Thanks for the correction!
Anonymous said…
there has been active talk on dailydave about the scada talk.

original mention: http://lists.immunitysec.com/pipermail/dailydave/2006-January/002867.html
new thread and pdf link (broken): http://lists.immunitysec.com/pipermail/dailydave/2006-January/002861.html
latest message and link to a book on scada security: http://lists.immunitysec.com/pipermail/dailydave/2006-February/002885.html
