Dealing with Security Instrumentation Failures

I noticed three interesting blog posts that address security instrumentation failures.

First, security software developer Charles Smutz posted Flushing Out Leaky Taps:

How many packets does your tapping infrastructure drop before ever reaching your network monitoring devices? How do you know?

I’ve seen too many environments where tapping problems have caused network monitoring tools to provide incorrect or incomplete results. Often these issues last for months or years without being discovered, if ever...

One thing to keep in mind when worrying about loss due to tapping is that you should probably solve, or at least quantify, any packet loss inside your network monitoring devices before you worry about packet loss in the taps. You need to have strong confidence in the accuracy of your network monitoring devices before you use data from them to debug loss by your taps. Remember, in most network monitoring systems there are multiple places where packet loss is reported...

I’m not going to discuss in detail the many things that can go wrong in getting packets from your network to a network monitoring tool... I will focus largely on the resulting symptoms and how to detect, and to some degree, quantify them. I’m going to focus on two very common cases: low volume packet loss and unidirectional (simplex) visibility.


Read Charles' post to learn ways he deals with these issues.
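
One way to check for the simplex problem Charles describes is to look at an existing capture and see whether each TCP conversation was recorded in both directions. A minimal sketch, assuming scapy is installed and "capture.pcap" is a placeholder file pulled from your monitoring system:

from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

directions = defaultdict(set)

for pkt in rdpcap("capture.pcap"):
    if IP in pkt and TCP in pkt:
        a = (pkt[IP].src, pkt[TCP].sport)
        b = (pkt[IP].dst, pkt[TCP].dport)
        key = tuple(sorted((a, b)))   # conversation key, independent of direction
        directions[key].add((a, b))   # record which direction(s) we actually saw

simplex = [k for k, seen in directions.items() if len(seen) == 1]
print(f"{len(simplex)} of {len(directions)} TCP conversations were seen in only one direction")

A handful of one-sided conversations is normal (scans, asymmetric routing); a large fraction usually points at a tap, span, or bonding problem.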

Next I'd like to point to this post by the West Point Information Technology Operations Center on Misconfiguration Issue of NSA SPAN Port:

Thanks to the input we have already received on the 2009 CDX dataset, we have identified an issue in the way the NSA switch was configured. Specifically, we believe the span port from which our capture node was placed was configured for unidirectional listening. This resulted in our capture node only "hearing" received traffic from the red cell.

Doh. This is a good reminder to test your captures, as Charles recommends!
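
Testing a capture point can be as simple as generating known traffic and confirming both directions show up. A quick live sketch, assuming scapy, root privileges on the sensor, and placeholder names for the monitoring interface and a test host you are exchanging traffic with:

from scapy.all import sniff, IP

MONITOR_IFACE = "eth1"       # interface attached to the span port (assumption)
TEST_HOST = "192.0.2.10"     # host you are generating test traffic against (assumption)

pkts = sniff(iface=MONITOR_IFACE, timeout=30, filter=f"host {TEST_HOST}")

to_host = sum(1 for p in pkts if IP in p and p[IP].dst == TEST_HOST)
from_host = sum(1 for p in pkts if IP in p and p[IP].src == TEST_HOST)

print(f"toward {TEST_HOST}: {to_host} packets, from {TEST_HOST}: {from_host} packets")
if to_host == 0 or from_host == 0:
    print("WARNING: the capture looks unidirectional; check the span/tap configuration")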

Finally, Alec Waters discusses weaknesses in SIEMs in his post Si(EM)lent Witness:

[H]ow can we convince someone that the evidence we are presenting is a true and accurate account of a given event, especially in the case where there is little or no evidence from other sources...

[D]idn’t I say that vendors went to great lengths to prevent tampering? They do, but these measures only protect the information on the device already. What if I can contaminate the evidence before it’s under the SIEM’s protection?

The bulk of the information received by an SIEM box comes over UDP, so it’s reasonably easy to spoof a sender’s IP address; this is usually the sole means at the SIEM’s disposal to determine the origin of the message. Also, the messages themselves (syslog, SNMP trap, netflow, etc.) have very little provenance – there’s little or no sender authentication or integrity checking.

Both of these mean it’s comparatively straightforward for an attacker to send, for example, a syslog message that appears to have come from a legitimate server when it’s actually come from somewhere else.

In short, we can’t be certain where the messages came from or that their content is genuine.


Read Alec's post for additional thoughts on the validity of messages sent to SIEMs.
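
To make the spoofing concern concrete, here is a minimal sketch of the kind of forgery Alec describes, for use only in a lab you control. It assumes scapy, root privileges, and placeholder addresses for the collector and the server being impersonated; a collector receiving this datagram has nothing but the forged source IP to go on.

from scapy.all import IP, UDP, Raw, send

COLLECTOR = "192.0.2.50"      # syslog/SIEM collector (assumption)
IMPERSONATED = "10.0.0.5"     # legitimate server whose identity is being forged (assumption)

payload = "<134>Jun 28 10:51:51 trusted-web-01 sshd[4242]: Accepted password for root from 203.0.113.9"
send(IP(src=IMPERSONATED, dst=COLLECTOR) / UDP(sport=514, dport=514) / Raw(load=payload))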

Comments

Alex Raitz said…
Alec Waters makes a good point with regard to integrity and non-repudiation of inbound messages. Syslog is convenient and easy, two things usually antithetical to security.

In a recent post on Splunk blogs, I highlighted that we (Splunk) can use certificate-based authentication between forwarder and receiver to provide authentication and proof-of-origin. Honestly, I wish more customers used the feature :)
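
As a rough sketch of the general idea (not Splunk's actual forwarder; the collector address and certificate file names are placeholders), a sender using mutual TLS over the RFC 5425 syslog port looks roughly like this:

import socket, ssl

COLLECTOR = ("logs.example.com", 6514)   # TLS syslog port per RFC 5425 (assumption)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("ca.pem")                      # trust anchor for the receiver
context.load_cert_chain("forwarder.pem", "forwarder.key")    # this sender's identity

message = "<134>Jun 28 10:51:51 qa-server app: something worth logging".encode()
frame = str(len(message)).encode() + b" " + message          # RFC 5425 octet-counted framing

with socket.create_connection(COLLECTOR) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=COLLECTOR[0]) as tls_sock:
        tls_sock.sendall(frame)

The receiver can then reject connections whose client certificate it does not recognize, which at least ties each message stream to a known forwarder.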
Anonymous said…
Good references, Richard. I'd like to add that data validation testing in general is a key step in implementing and maintaining any IT system. In my experience, most projects involving security appliances and software reveal fundamental errors in the product that cause data loss and that have clearly been there for years. That tells me no other customers or users had yet performed rigorous validation testing, which suggests this engineering lapse is more of an epidemic than an occasional oversight by IT staff.

/soapbox
Anonymous said…
There are log management products out there that support syslog over SSL. I don't want to look like a vendor shill so I won't say who, but it does exist.
Anonymous said…
A while back my professor was working with a QA engineer from Fluke on a research project. It involved this very topic: verifying the integrity of captured data in monitoring and network forensic investigations, and the challenge of proving that such data is valid evidence when submitted in a court of law. In my experience, Mike has it right. Most don't care about it, test for it, or even think about it. As long as the infrastructure is up and running, packets are being passed, events are being forwarded, and so on, that's all that matters to some.
Alec Waters said…
Hi Alex - mutual certificate authentication will certainly give some assurance of the origin of messages. Others have suggested the use of IPSec between log source and collector to accomplish the same goal.

I'm not sure these techniques cover all the bases though.

My doomsday scenarios of log contamination all implied some kind of insider abuse (if your SIEM takes events from world+dog via the Internet then you deserve everything you get :) ).

Given insider abuse, the miscreant could just cause a certificate-authenticated syslog source to send any message they wanted via the logger command:

myProxy:~# logger Bob from accounting is downloading pr0n again! Call HR now!!

The syslog message will be robustly authenticated by the forwarder, but it doesn't change the fact that the message is completely bogus.

alec
Alex Raitz said…
Alec,

That is certainly another abuse case worth considering.

'logger' will in most cases display the user that issued the message in the logged event itself:

Jun 28 10:51:51 qa-server araitz: it sucks that by default I can write to logger as a non-root user

If that fails, hopefully the shell history file is being monitored, thus providing us with a breadcrumb back to the insider.

Either way, this assumes that monitoring is properly implemented and that analysts have the work-flow and know-how to put 1 + 1 together (and we all know how dangerous such assumptions are).
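
As a rough illustration of that breadcrumb, here is a sketch that scans traditional BSD-style syslog lines for messages tagged with ordinary user accounts, which is what a logger-based injection tends to look like (the file path and the account list are assumptions):

import re

SYSLOG_LINE = re.compile(r"^\w{3}\s+\d+ [\d:]{8} (\S+) ([\w.-]+)(?:\[\d+\])?: (.*)$")
ORDINARY_USERS = {"araitz", "bob"}   # populate from your own account inventory (assumption)

def flag_possible_logger_injections(path):
    with open(path) as f:
        for line in f:
            m = SYSLOG_LINE.match(line)
            if m and m.group(2) in ORDINARY_USERS:
                print("possible logger injection:", line.rstrip())

flag_possible_logger_injections("/var/log/messages")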

Still, your point that mutual certificate authentication (let alone encryption) doesn't solve a lot of abuse cases is a good one.

One point I may not agree with concerns the perceived consequences of having SIEM and other logging products external-facing. Putting the whole cloud discussion aside, there must be some degree of external exposure if one is interested in collecting data/events from laptops and wireless devices.

-Alex
Alex Raitz said…
I just had a chance to read Charles Smutz's article top to bottom, and I think it is a "must read" for anyone instrumenting packet capture in an enterprise environment. Great work!
